I’ve moved to a new URL!

Say hello to https://thaumatorium.com/!

I’ve been working on it since early this year and I think it’s come far enough to ‘advertise’ it here. Fully handcrafted, which means it should be quite a bit faster than this slow-ass piece of junk. I can’t expect much from WordPress…

Oh well, off to the new site you go! I’ve created new articles and projects, built a Knowledge Base (KB), and made a nice about page where I can rant my butt off, if I want to.


How to get PageSpeed information on the command line

edit: my new site is here: https://thaumatorium.com/

This curls the Google API and writes the JSON response to a file named with the current date and time in ISO 8601 format. Great if you want to keep track of your performance over time.

To stay Windows-compatible I’ve had to remove the colons (Windows doesn’t allow them in filenames), which meant removing the dashes too, to stay ISO 8601 compliant (the basic format).

curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=[WEBSITE URL GOES HERE]" > "$(date -u +"%Y%m%dT%H%M%SZ").json"


Create a sitemap(.xml) and robots.txt


For several reasons:

  • So search engines (other than Google) can easily index your site
  • So any curious eyes can satisfy their nerdy needs.
  • Great for crawling your site, so you can find all CSS classes from all pages, to reduce your CSS InnerCore/OuterCore files to a bare minimum.

Initially, create one manually to lay out the website during the prototype/early design days.

Make sure to include the necessary tags.


Splitting JS/CSS into 2 or 3 files


Context: Checking Coverage with Chrome Devtools – https://developers.google.com/web/tools/chrome-devtools/coverage

After checking coverage, split the JS/CSS up into:

  • InnerCore (inline JS/CSS, needed for when the JS/CSS files don’t load on super slow connections)
  • OuterCore (the JS/CSS that’s covered by Devtools)
  • Mantle (the JS/CSS that’s NOT covered by Devtools, but can be potentially used by the site)

Do all of this programmatically, because you do NOT want to do this by hand.

Names are based on Earth: https://en.wikipedia.org/wiki/Earth%27s_inner_core#/media/File:Earth_poster.svg

Why I hate Javascript


for/in is meant for iterating the properties of an object.

That’s right! for(item in list) {/*whatever*/} isn’t meant to loop over the items in the list. IT’S FOR LOOPING OVER THE PROPERTIES. IT’S THE ONLY LANGUAGE THAT DOES THIS. FUCKING. WHY!???



Here’s something to watch to hate JS even more: https://www.destroyallsoftware.com/talks/wat

Spark files and taking notes


For those that don’t know what a Spark file is:
It’s basically a file (be it in Evernote, Google Keep, or in my case: a MarkDown text file, stored in Dropbox for safekeeping) to dump random ideas in. Once those ideas are dumped, you can look at them at a later date to see if there’s any merit to them.

Because every time you’ve got a new idea, it’s the best idea in the world! The problem is that you’ve hyped yourself up with your own idea (“don’t get high on your own supply”) and you might not be able to see what may be wrong with it, so writing it down to look at it on a later date is the perfect way to self-critique.

Now, here’s how I control my ideas:
If I’m behind my PC, where I have access to my several *.spark.md files, I’ll just access them directly.
If I’m on the move, I’ll either mail myself the idea, or dump it in Google Keep (on my phone), where I can later move it into the correct text file.

Why are they *.spark.md? So I can easily find them with Everything. I’ve got a shortcut (Alt+Z, under Options > General > Keyboard > Toggle window Hotkey) to toggle the window and the setting to close the window once I’ve opened a file (Options > General > Results > Close Window on execute).

I open the file with Visual Studio Code because of the amazing extension support it has. I’m using markdownlint (for keeping a consistent MarkDown file), Markdown Preview Enhanced (to see how the file compiles) and English Support for LanguageTool (as an alternative for Grammarly, since they: 1. don’t support anything other than English, and 2. have no extension for vscode). It’s a neato combo!

The things that I place in my spark files are usually about programming (little features that I want to add to this project that I’ve had in my head for the last… 5 years now? Ever since 2014, I believe. It’s meant to take on reddit, because reddit has slowly turned into a pleb-tier community with only shitty puns and retarded communities. If you’re clutching your pearls because I used the word retarded: My site is not for you, because I don’t deem the word retarded as bad. It literally means “held back” and if you think that’s a ‘naughty no-no word’, you’re retarded too. Please do fuck off back to reddit. /rant)

Checking on those ideas now and then has given me a few well-supported ideas over the years. Can’t spill them here, because I want to eventually make money with them. :^)

Anyway, I just wanted to share this with you because this process really helped me keep my ideas on paper in a somewhat structural way. I hope it will help you too.

PS: The Spark File is a blog post, originally from 2012: https://medium.com/the-writers-room/the-spark-file-8d6e7df7ae58 (if that link ever dies: It’s also available on web.archive.org), which is the source of this idea.

Haskell’s fold functions explained


Protip: Use https://repl.it/ to run your own little test programs (yes, they support languages other than Haskell too).

Haskell’s fold functions are higher-order and recursive functions (if you’ve read Types (or classes) of Haskell functions, you’ll know what those are): they take a function, a first/final value (more on this later), and a list of things, and return a single reduced value.


Here is the (Wikipedia) definition of foldr (pronounced as “fold right”):

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)

There are two cases:

  • The base case (where the input list is empty)
  • The general case (where the input list is not empty): one item gets reduced and the function is recursively called on the rest of the list.

Now let’s run foldr:

foldr (+) 0 [1,2,3,4]

As you’ll notice, the + operator is placed in parentheses. This turns the infix operator into a regular (prefix) function that can be passed as an argument. If you run the command above without the parentheses, you’ll see you’ll get an error (try it on the aforementioned https://repl.it/ website!).

The answer is 10, as 1 + 2 + 3 + 4 = 10, but what’s that 0 doing there? That is the identity value for +. I haven’t written an article about the identity value for operators (or functions), but here’s what you need to know: the identity value of an operation is the value that gives any other value back unchanged: x + 0 = x.

Other operators have different identity values:

  • Subtraction (-) has 0 as its (right) identity: x - 0 = x
  • Multiplication (*) has 1 as identity (multiplying by 0 would always give you 0, which is unwanted behavior)
  • Division (/) has 1 as its (right) identity
  • Exponentiation (^, aka “the power operator”) also has 1 as its (right) identity
  • A function f that outputs a list has the empty list ([]) as identity
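As a quick sanity check, here’s a small sketch (my own example, not from the original post) you can paste into https://repl.it/ to see identity values in action with foldr:

```haskell
-- Folding an empty list always returns the identity value you passed in,
-- and folding with the right identity leaves a single-element list unchanged.
main :: IO ()
main = do
  print (foldr (+) 0 ([] :: [Int]))           -- 0: identity of (+)
  print (foldr (*) 1 ([] :: [Int]))           -- 1: identity of (*)
  print (foldr (+) 0 [42 :: Int])             -- 42, because 42 + 0 = 42
  print (foldr (*) 1 [42 :: Int])             -- 42, because 42 * 1 = 42
  -- For list-producing functions the identity is the empty list:
  print (foldr (++) [] [[1,2],[3,4 :: Int]])  -- [1,2,3,4]
```

If you pick the wrong value (say, foldr (*) 0), every fold collapses to 0, which is exactly the “unwanted behavior” mentioned above.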

But how is 10 calculated? Since the function is called fold right, we know two things: all grouped parentheses (more on that below the executed code) are on the right and the identity value will be the last value inserted.

If I run the code by hand (if you’ve ever followed a Logic course, you’ll recognize this as induction) we’ll get the following execution:

foldr (+) 0 [1,2,3,4]
= { apply foldr, since the input is not an empty list we apply the general case }
(+) 1 (foldr (+) 0 [2,3,4])
= { apply foldr, ditto as before }
(+) 1 ((+) 2 (foldr (+) 0 [3,4]))
= { apply foldr, ditto as before }
(+) 1 ((+) 2 ((+) 3 (foldr (+) 0 [4])))
= { apply foldr, ditto as before }
(+) 1 ((+) 2 ((+) 3 ((+) 4 (foldr (+) 0 []))))
= { apply foldr, but since the list is now empty, return the identity value instead! }
(+) 1 ((+) 2 ((+) 3 ((+) 4 0)))
= { apply the most inner + operator }
(+) 1 ((+) 2 ((+) 3 4))
= { apply the most inner + operator }
(+) 1 ((+) 2 7)
= { apply the most inner + operator }
(+) 1 9
= { apply the last + operator }
10

Now, when you look at the moment all foldrs have been applied, you may see that you can rewrite that line from:

(+) 1 ((+) 2 ((+) 3 ((+) 4 0)))

to:

1 + (2 + (3 + (4 + 0)))

which is more readable (IMO). Now, for this instance, the order of execution doesn’t matter at all, but there are certain operators (like subtraction and division) where the order does matter! 1/2 = 0.5, whereas 2/1 = 2.

Let’s say we execute the next line; what will the answer be?

foldr (-) 0 [1,2,3,4]

Let’s skip the full expansion this time and go straight to the cleaned-up line:

1 - (2 - (3 - (4 - 0)))

As I’ve mentioned before: all grouped parentheses are grouped on the right (because fold right) and the identity value is also on the right.
When we run this code we get:

1 - (2 - (3 - (4 - 0)))
= { apply the most inner - }
1 - (2 - (3 - 4))
= { apply the most inner -. Negative values must be wrapped in parentheses }
1 - (2 - (-1))
= { apply the most inner -. Subtracting a negative number is the same as adding it  }
1 - 3
= { apply the last - }
-2


Now for foldl (pronounced as “fold left”); here’s the definition (again from Wikipedia):

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f z [] = z
foldl f z (x:xs) = foldl f (f z x) xs

What is different? The input function has its argument types flipped, and in the general case you can see that the application of f has moved into the location of what was the identity value.

Let’s run the same code we did last time:

foldl (-) 0 [1,2,3,4]
= { apply foldl. Again the list isn't empty, so we'll apply the general case }
foldl (-) ((-) 0 1) [2,3,4]
= { apply foldl. ditto }
foldl (-) ((-) ((-) 0 1) 2) [3,4]
= { apply foldl. ditto }
foldl (-) ((-) ((-) ((-) 0 1) 2) 3) [4]
= { apply foldl. ditto }
foldl (-) ((-) ((-) ((-) ((-) 0 1) 2) 3) 4) []
= { apply foldl. Again, the list is now empty, so apply the base case }
(-) ((-) ((-) ((-) 0 1) 2) 3) 4
= { apply the most inner - }
(-) ((-) ((-) (-1) 2) 3) 4
= { apply the most inner - }
(-) ((-) (-3) 3) 4
= { apply the most inner - }
(-) (-6) 4
= { apply the last - }
-10

Now, after all the foldls have been applied, we can again clean up that code, from:

(-) ((-) ((-) ((-) 0 1) 2) 3) 4

to:

(((0 - 1) - 2) - 3) - 4

There, much more readable! As you’ll notice, the grouped parentheses are now all on the left (from the name fold left), ditto for the identity value.

Sometimes you have to rewrite code to make it make sense in Haskell.
It’s a sad fact of life.
Anyway, if we reduce this cleaned up code we’ll get:

(((0 - 1) - 2) - 3) - 4
= { apply most inner - }
(((-1) - 2) - 3) - 4
= { apply most inner - }
((-3) - 3) - 4
= { apply most inner - }
(-6) - 4
= { apply the last - }
-10

I think you’ve started seeing a pattern with foldl/foldr right about now: the grouped parentheses and the identity value are either all on the left or all on the right – that’s how I’ve been able to easily write out most of the code by hand (I did have to use VSCode with the Bracket Pair Colorizer 2 addon for the foldl (-) 0 [1,2,3,4] code to check my parentheses :p)
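You can verify both hand-executions at once on https://repl.it/ or in GHCi; this little program (my own check, not from the original post) prints the results of both folds:

```haskell
main :: IO ()
main = do
  -- fold right groups as 1 - (2 - (3 - (4 - 0)))
  print (foldr (-) 0 [1,2,3,4 :: Int])  -- -2
  -- fold left groups as (((0 - 1) - 2) - 3) - 4
  print (foldl (-) 0 [1,2,3,4 :: Int])  -- -10
  -- for (+) the grouping doesn't matter, so both folds agree:
  print (foldr (+) 0 [1,2,3,4 :: Int])  -- 10
  print (foldl (+) 0 [1,2,3,4 :: Int])  -- 10
```

The subtraction results differing (-2 vs -10) is exactly the “order matters” point from above; the addition results agreeing shows why the worked (+) example never had to care about grouping.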

If you’ve got any questions (especially when you don’t understand a part), let me know down below!

Alternative explanation

With either foldl or foldr, you can think of the text of the list being manipulated until something executable is produced (note: the intermediate lines below aren’t valid Haskell; this is just a mental model):

foldl (-) 0 [1,2,3,4]
= { add the identity value to the left of the list (because fold left) }
foldl (-) [0,1,2,3,4]
= { replace all commas with the given function }
foldl [0-1-2-3-4]
= { lastly, apply the parentheses, grouped to the left (because fold left). The number of grouping parentheses is the original length of the list - 1 }
(((0-1)-2)-3)-4

foldr (-) 0 [1,2,3,4]
= { add the identity value to the right of the list (because fold right) }
foldr (-) [1,2,3,4,0]
= { replace all commas with the given function }
foldr [1-2-3-4-0]
= { lastly, apply the parentheses, grouped to the right (because fold right). The number of grouping parentheses is the original length of the list - 1 }
1-(2-(3-(4-0)))


Quick ‘n Dirty Big O



I’m not a mathematician, so this isn’t going to be exact, but I’ll try to give you an intuitive understanding of Big O.

Big O (usually written as O(x), where x is a mathematical expression like 1, n, n^2 or 2^n, for example) expresses how many steps an algorithm will maximally take to run, relative to the size of its input.

Example 1:

If you have an array/list and want to look up the nth item, it’ll take at most so-called “constant time”, expressed as O(1). This does not mean it literally takes 1 step; it means that no matter how many items you have in your data structure, the number of steps it takes to retrieve the nth item is always a constant amount (whether that’s actually 1 step or 10).

Example 2:

A lookup in a tree structure takes O(log n) steps, because (assuming the tree is sorted and reasonably balanced) you can eliminate the other branch every time you go left or right in the tree (see image below: if you choose 9 instead of 3, you won’t need to check the 1 and the 4).

Image nicked from http://cslibrary.stanford.edu/110/BinaryTrees.html

Note that you may execute a few constant steps before that (like creating a variable to save the answer in, so you can return it), but since O(log n)’s runtime overshadows that of any O(1) code, we only note the O(log n) part.
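The tree lookup can be sketched in Haskell (my own illustration; the Tree type and the example values mirror the linked Stanford image, they’re not from the article). Each comparison discards a whole subtree, which is what gives you O(log n) steps on a balanced tree:

```haskell
-- A simple sorted (binary search) tree.
data Tree = Leaf | Node Tree Int Tree

-- Each step discards one subtree entirely:
-- O(log n) on a balanced tree (degrades to O(n) on a degenerate one).
contains :: Tree -> Int -> Bool
contains Leaf _ = False
contains (Node left v right) x
  | x == v    = True
  | x < v     = contains left x   -- the right subtree is never visited
  | otherwise = contains right x  -- the left subtree is never visited

-- Root 5, with 3 (children 1 and 4) on the left and 9 on the right.
example :: Tree
example =
  Node (Node (Node Leaf 1 Leaf) 3 (Node Leaf 4 Leaf))
       5
       (Node Leaf 9 Leaf)

main :: IO ()
main = do
  print (contains example 9)  -- True, without ever touching 1 or 4
  print (contains example 7)  -- False, after only two comparisons
```

Looking for 9 goes right at 5 and stops: the 1 and 4 under the left branch are never checked, just like the article describes.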

Example 3:

If you want to find out whether your array/list contains a certain item (assuming it’s an unsorted list – a “does list contain x” check, if you will), it’ll take O(n) steps, because every item has to be checked once to see if it’s the item you’re looking for.

If you have a list with n=10 (meaning 10 items in your list) and the last item is the one you’re looking for, it’ll take 10 steps, but it won’t necessarily take that many; as I said in the beginning, Big O is how many steps an algorithm will maximally take to run.
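A linear “does the list contain x” check could look like this in Haskell (my own sketch; the built-in `elem` does the same job). In the worst case, every element gets inspected once, hence O(n):

```haskell
-- O(n): in the worst case (item absent, or sitting at the end)
-- every element of the list is checked exactly once.
containsItem :: Eq a => a -> [a] -> Bool
containsItem _ []     = False
containsItem x (y:ys) = x == y || containsItem x ys

main :: IO ()
main = do
  print (containsItem 10 [1..10 :: Int])  -- True: found on the 10th check
  print (containsItem 11 [1..10 :: Int])  -- False: all 10 items were checked
```

The second call is the worst case from the paragraph above: n=10 items, 10 checks, and the answer is still False.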

Again, if your algorithm has a part that happens to be O(1) or O(log n) (just as an example), you still note the whole thing down as O(n), because O(n) overshadows the runtime of O(1) and O(log n). Say you do, for some reason, want to jot down the exact Big O: you’d write it as O(log n + n), but that’s not what you’ll see in practice. You can see what I mean, visually speaking, under the “Visually speaking” paragraph. :^)

Most of the Big O notations:

I think you’re getting the gist now, so I’ve copied a small, simplified list of possible Big O expressions from Wikipedia, sorted from shortest runtime to longest. The rows with a “Think …” example are the ones I’ve seen most commonly.

n is the number of items fed to the algorithm; c is a constant number for that specific algorithm (usually it’s either 2 or 3, though I don’t guarantee it).

  • O(1) – constant – Think a C array lookup, like list[2]
  • O(log log n) – double logarithmic
  • O(log n) – logarithmic – Think a lookup in a sorted, binary tree
  • O((log n)^c) – polylogarithmic
  • O(n^c), with 0 < c < 1 – fractional power
  • O(n) – linear – Think ‘for’ loops
  • O(n log n) = O(log n!) – linearithmic, loglinear, or quasilinear
  • O(n^2) – quadratic – Think a ‘double for’ loop
  • O(n^c) – polynomial or algebraic

For the full table: https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions
Another great wiki article is https://en.wikipedia.org/wiki/Time_complexity

Visually speaking:

Try to avoid anything above O(n). You can’t always, though: some algorithms (like turning two lists into a Cartesian product/table, or a naive implementation of a Fibonacci number generator, which takes roughly 2^n steps – though faster Fibonacci implementations are known) simply have no faster alternative yet. And as far as I know, we can’t always mathematically prove whether a faster algorithm for a given problem even exists.

You can see why you should avoid anything >O(n) here:

Source: https://en.wikipedia.org/wiki/Time_complexity#/media/File:Comparison_computational_complexity.svg

A bit of trivia:

There’s a thing called P=NP, which asks “whether every problem whose solution can be quickly verified can also be solved quickly”. In concrete form: a Sudoku solution can be verified to be correct in a relatively short time, yet the puzzle can’t (as far as we know) be solved in a relatively short time. P basically means “polynomial” and NP roughly “exponential” or even “factorial”. Do note that this is not mathematically correct, but I’m also not going into what the hell “nondeterministic polynomial time” is. That’s a bit out of the scope of this tutorial.

Anyway, this P=NP thing is currently still a mathematically unsolved problem within Computer Science, so maybe (if you’re smarter than me) you can solve it in the future.

Closing words

I hope this short intro into Big O has been (at least a little bit) of help to you 🙂

PS: Note that there are other so-called asymptotic notations (of which Big O is one), like o (little-o), Ω (Big Omega), ω (little omega), and Θ (Theta). I have to confess that I have no idea what those represent, but I’m guessing things like average runtime and minimal runtime or something like that; those mathematicians have to fuck everything up for me by using words I don’t understand ;_;

Fucking mathematics, how does it work?
