# Alif Wahid

Alas, Tumblr will be no more :’(

And with that, I will cease to be here in the coming weeks.

My principles are not up for sale!

My personal domain (alifwahid.me) expires in July. So I might renew it (or maybe I won’t) and start my own blog elsewhere (most likely as an awesome couchapp served by a CouchDB instance hosted by either Cloudant or Iriscouch).

Goodbye Tumblr! You’ve been great to me.

I will miss you :(

Hahaha…too cute!

### Will Yahoo Try to Get Its “Cool Again” by Doing a Deal for Tumblr?

Oh man, I mainly use Tumblr because of its INDEPENDENCE from these moronic internet giants :P I suppose it’s time to start hunting for a new host :/

Infotainment science journalism appears to operate under the assumption that if a scientific paper has been peer-reviewed and published by conscientious scientists, the results and conclusions are valid. The peer-review process is equated with a “fact checker” role, thus allowing infotainment science journalism to promote the perspectives of the researchers who conducted the studies.

Critical science journalism takes a different approach and focuses on providing a balanced assessment of the work, one that highlights specific strengths but also emphasises specific limitations or flaws. It is no big secret that the majority of research findings published in peer-reviewed scientific journals will probably not hold up when other groups attempt to replicate them. This lack of replicability can be due to research misconduct, systematic errors or other cognitive biases, which commonly occur even in the most conscientious and meticulous scientists.

- Jalees Rehman in a column for the Guardian titled “The need for critical science journalism”. Perfectly articulated, and he ends with 4 sceptical criteria for distinguishing rubbish infotainment from critical journalism that are dead obvious only AFTER someone else has pointed them out to you :P Because that’s how the human brain works, even for utter geniuses ;)

I cannot understand why we idle discussing religion. If we are honest—and scientists have to be—we must admit that religion is a jumble of false assertions, with no basis in reality. The very idea of God is a product of the human imagination. It is quite understandable why primitive people, who were so much more exposed to the overpowering forces of nature than we are today, should have personified these forces in fear and trembling. But nowadays, when we understand so many natural processes, we have no need for such solutions. I can’t for the life of me see how the postulate of an Almighty God helps us in any way. What I do see is that this assumption leads to such unproductive questions as why God allows so much misery and injustice, the exploitation of the poor by the rich and all the other horrors He might have prevented. If religion is still being taught, it is by no means because its ideas still convince us, but simply because some of us want to keep the lower classes quiet. Quiet people are much easier to govern than clamorous and dissatisfied ones. They are also much easier to exploit. Religion is a kind of opium that allows a nation to lull itself into wishful dreams and so forget the injustices that are being perpetrated against the people. Hence the close alliance between those two great political forces, the State and the Church. Both need the illusion that a kindly God rewards—in heaven if not on earth—all those who have not risen up against injustice, who have done their duty quietly and uncomplainingly. That is precisely why the honest assertion that God is a mere product of the human imagination is branded as the worst of all mortal sins.

- Paul Dirac apparently said this at the 1927 Solvay Conference.

### Works for me!

Ah yes, sigh, that impenetrable shield of self-centred relativism.

X: Hey, how come YouTube freezes every time Emily Graslie waves her hands?

Me: Works for me!

Y: Yo, why doesn’t my yo-yo resonate in synchrony with Yo-Yo Ma?

Me: Works for me!

Z: Anyone else see a warning from GCC v4.6.3 on Linux v3.2.0 about the return value of fscanf being ignored when using the -O switch?

Me: Works for me!

AARGH!! >.< What is that supposed to mean, really? I mean, which version of you does it work for, exactly? Is it the one that answers without knowing what is being asked? Or is it the one that knows but answers incorrectly, anyway? Or is there some other version of you in between that neither knows nor answers, except to mutter “Works for me!” involuntarily? I would really like to know!

### Negative Absolute Temperatures

I’ve only just gotten round to reading a Science journal article from a few months back that caused a lot of media buzz. It was about negative absolute temperatures, which sounds like it means going below absolute zero (an impossibility in every sense of the word). Anyway, here’s the full-text PDF on arXiv.org for anyone interested. HIGHLY recommended reading, this is, since it’s a beautiful piece of experimental physics that will blow your mind!

As it turns out, it actually has nothing to do with going below absolute zero! Instead, it’s a symmetric matter of inverting the sign of the temperature in the Boltzmann factor so that more particles occupy higher energy states than lower energy states. In essence, it’s a question of how more energy can be packed into a thermodynamic system after it has reached infinite temperature. Because at that point every energy state is occupied with equal probability, so the same average number of particles would be found in every energy state. Hence, the total energy is finite despite the infinite temperature :\ My head hurts :’(

But this finite total energy is still not the maximum possible! So the theory then goes that if the probabilities can be pushed such that fewer particles occupy lower energy states and more particles occupy higher energy states, then the total energy will be larger than what it was at infinite temperature! And the way to do this (in theory at least) is to invert the sign of the exponent in the Boltzmann factor, so that the probability distribution now grows exponentially with energy instead of decaying exponentially as before.
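To make that sign flip concrete, here’s the textbook Boltzmann occupation probability for an energy state $$E_i$$ (my own recap of a standard formula, not something lifted from the paper):

$$p_i = \frac{e^{-E_i/(k_B T)}}{\sum_j e^{-E_j/(k_B T)}}$$

For $$T > 0$$ the exponential decays with energy, so the lower states hog the particles. As $$T \to \infty$$ the exponent vanishes and every state becomes equally likely. For $$T < 0$$ the exponent flips sign, so $$p_i$$ grows with $$E_i$$ and the higher states become the crowded ones.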

Anyway, what all of this means is that negative absolute temperatures are HOT, because a system with negative temperature has a humongous amount of energy stored inside (relatively speaking, of course). So the authors of this paper devised clever ways to trap bosons into a negative temperature regime so that they have a greater probability of being in a higher energy state than in the relatively lower energy states. Thus the total energy is increased beyond what would be possible if one were to simply heat up the bosons (so to speak) until they reached infinite temperature the old fashioned way :P
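As a sanity check on that claim, here’s a minimal Python sketch (entirely mine, using a hypothetical two-level system rather than anything from the paper) showing that the average energy at a negative temperature exceeds the infinite-temperature value:

```python
import math

EPSILON = 1.0  # energy gap of a hypothetical two-level system (units where k_B = 1)

def average_energy(temperature):
    """Mean energy of a two-level system with energies {0, EPSILON} at a given temperature."""
    # Boltzmann weights for the ground state (E = 0) and the excited state (E = EPSILON).
    ground = 1.0
    excited = math.exp(-EPSILON / temperature)
    return EPSILON * excited / (ground + excited)

print(average_energy(0.5))   # positive T: well below EPSILON / 2
print(average_energy(1e9))   # enormous T, approximating infinite temperature: about EPSILON / 2
print(average_energy(-0.5))  # negative T: above EPSILON / 2, i.e. hotter than infinite temperature
```

The ordering of the three printed numbers (roughly 0.12, 0.5 and 0.88) is the whole point: the negative-temperature system carries more energy than the infinite-temperature one.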

It’s amazing to ponder the role of symmetry in all of physics (and consequently, all of nature). We didn’t evolve to grasp such beauty intuitively or quickly. It takes a lot of thinking to appreciate the sheer simplicity with which the universe goes about its business. The idea that total energy can be finite despite a system having infinite temperature is mind-blowing to say the least. And thanks to symmetry, the temperature can then turn around to be negative just so that the finite total energy still has a chance to increase further. In fact, the maximum energy obtainable in theory happens to be when this negative temperature rolls all the way around to absolute zero - from the other side :P

The English word brother and the French frère are related to the Sanskrit bhratr and the Latin frater, suggesting that words as mere sounds can remain associated with the same meaning for millennia. But how far back in time can traces of a word’s genealogical history persist, and can we predict which words are likely to show deep ancestry?

- Opening lines from a rather fascinating paper on linguistics. Full-text PDF available here. Making a bit of media buzz currently, is this paper. I haven’t fully understood their statistical methodology yet, so I’m not sure what to make of it.

### James Burke Documentaries FULL

I stopped watching TV quite some time ago, since there’s a lifetime’s worth of historical documentaries to catch up on. Starting with James Burke’s three incarnations of “Connections”.

The naming of certain things in mathematics and statistics is rather odd indeed. For example, there is a stochastic model called the Chinese Restaurant Process. I’m betting that in order to ensure China’s populous neighbour didn’t get upset at being refused entry into a non-existent statisticians-only restaurant, someone invented a complementary model and named it the Indian Buffet Process. Is there an Australian BBQ Process that I’m unaware of? But the one that firmly impaled my mind into the fence post was the Hairy Ball Theorem from algebraic topology :P Who dreams up a name like that without considering the grave injuries it may inflict on the reader’s mind?

Hmm…Ben Kingsley is apparently playing a half-Maori war hero named Mazer Rackham in the upcoming Ender’s Game movie…GO NEW ZEALAND!!! Wohoo! :) Oh and I should probably get round to reading that book someday before the movie comes out :/ Or, perhaps I can just wait for the movie to come out :P lol

(Source: nzherald.co.nz)

I hear this is en route to smash records all over the place. Seriously groovy stuff!

### Distribution of Decimal Digits in Tiny Hyper-Exponentiated Natural Numbers

When dealing with impractically large numbers (e.g., a dozen or more digits in length), it certainly helps to inject an overdose of relativity by re-defining what is tiny, what is small and what is larger than both of those :P Hyper-exponentiation seems to be a marvellous trick for doing so. For instance, $$3^{3^3}$$ is an integer that’s 13 digits long, just count for yourself: 7625597484987. Now try this little beauty: $$4^{4^4}$$, which is a whopping 155 digits long! Here, I’ll even paste it below just for your counting pleasure :P

13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084096

The point is that symbolic compactness is achieved with this notation of raising a power to another power (i.e., hyper-exponentiation). This can go on inductively all the way to infinity. But let’s stop well short of that monster. Hence my choice of the term tiny hyper-exponentiated natural number (or THENN for short). THENNs are certainly not literally tiny numbers. Rather, they are tiny as far as hyper-exponentiation is concerned, since I’m only taking one level of hyper-exponentiation in the form of $$a^{b^c}$$, where $$a, b, c$$ are all natural numbers under ten (i.e., integers greater than zero and below ten).
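Since Python integers have arbitrary precision, THENNs are trivial to compute directly; here’s a tiny sketch (mine, just for poking around) that reproduces the digit counts quoted above:

```python
# Compute a few THENNs and report how many decimal digits each one has.
# Note that Python's ** operator is right-associative, so a ** b ** c == a ** (b ** c).
for a, b, c in [(3, 3, 3), (4, 4, 4), (5, 5, 5)]:
    thenn = a ** b ** c
    print(f"{a}^{b}^{c} has {len(str(thenn))} digits")
```

Running it prints 13, 155 and 2185 digits respectively, matching the two counts quoted above and foreshadowing the $$5^{5^5}$$ example below.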

So, the 155-digit number above is rather interesting to look at without bothering to grasp its enormity. Given that I printed that number in decimal format using the ten digits 0 to 9, it’s an interesting statistical question as to how those digits are distributed. The answer turns out to involve a neat concept called Normal Numbers. Anyway, ignoring the answer for now :P I figured I’d do some empirical testing and plot the distribution of these ten decimal digits in a bunch of THENNs. So here’s a plot of the distribution for $$5^{5^5}$$ just to get the ball rolling (python code).

No point in trying to print the full 2185 digits of this tiny number :P Instead, my python script compiled a histogram of the distribution of digits from 0 to 9. I’ve plotted this above by first dividing each digit’s frequency by the sum of all frequencies and then subtracting the mean from each normalised frequency (hence the jargon at the bottom: Mean Centred Normalised Frequency). So this funky, rotated histogram is just showing which digits occurred less frequently than the average, and which ones occurred more frequently than the average, in terms of fractional proportions.
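The actual python script is linked above; for completeness, here’s a small stand-alone sketch of the same bookkeeping (the function name is mine and purely illustrative, not necessarily what the linked script does):

```python
from collections import Counter

def mean_centred_normalised_frequencies(n):
    """Map each decimal digit of n to its normalised frequency minus the mean frequency."""
    digits = str(n)
    counts = Counter(digits)
    # Normalise each digit's count by the total number of digits...
    freqs = {d: counts.get(str(d), 0) / len(digits) for d in range(10)}
    # ...then subtract the mean frequency, which is exactly 0.1 over ten possible digits.
    mean = sum(freqs.values()) / 10
    return {d: f - mean for d, f in freqs.items()}

thenn = 5 ** 5 ** 5  # the 2185-digit THENN plotted above
for digit, value in sorted(mean_centred_normalised_frequencies(thenn).items()):
    print(f"{digit}: {value:+.4f}")
```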

The scales are misleading in this plot, since it appears at a glance as though 7 is hardly ever present in this number compared to the average, and 3 is far above the average! But no, all ten digits are incredibly close to the average frequency, since they are within 1% on either side of the mean! Thus, for all practical purposes, this just shows that all ten digits appear with roughly equal frequency in this number containing 2185 digits. That means the probability of any given digit occurring is almost equal to that of any other digit, and this converges towards a flat uniform distribution as the length of a THENN goes to infinity. Incidentally, this is the general idea of Normal Numbers. That is, any THENN (or larger number) appears to have its digits distributed according to a flat uniform density function.

One THENN is clearly not enough to get a converging sense of something as general as Normal Numbers :P So I calculated 160 different THENNs for a reasonable sample size. The largest THENN was over 260 thousand digits in length (no way I would print that on screen :P). The smallest THENN was 9 digits in length (which is of the order of several hundred million, so it’s still not literally tiny, but it is a THENN). That resulted in 160 separate data points for each decimal digit, in the above fashion of Mean Centred Normalised Frequencies. Then I plotted the cumulative distribution function of all the digits as follows (python code).
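For anyone wanting to replicate the experiment, here’s a rough sketch of how such a sample can be assembled and turned into empirical CDFs. This is my own illustration with an arbitrary size cap so it runs in reasonable time; it is not the exact 160-number sample or the plotting code behind the figure:

```python
from collections import Counter

import matplotlib.pyplot as plt
import numpy as np

def centred_freqs(n):
    """Mean-centred normalised digit frequencies of n, as a length-10 array."""
    digits = str(n)
    counts = Counter(digits)
    freqs = np.array([counts.get(str(d), 0) / len(digits) for d in range(10)])
    return freqs - freqs.mean()

# Collect a sample of THENNs a^(b^c), skipping the degenerate cases where a, b or c is 1
# and capping the size so that no single number exceeds roughly 20,000 digits.
samples = []
for a in range(2, 10):
    for b in range(2, 10):
        for c in range(2, 10):
            if b ** c * np.log10(a) > 20000:
                continue
            samples.append(centred_freqs(a ** b ** c))
samples = np.array(samples)  # shape: (number of THENNs, 10)

# Plot an empirical CDF per digit by sorting that digit's column of values.
for d in range(10):
    values = np.sort(samples[:, d])
    cdf = np.arange(1, len(values) + 1) / len(values)
    plt.plot(values, cdf, label=str(d))
plt.xlabel("Mean Centred Normalised Frequency")
plt.ylabel("Empirical CDF")
plt.legend(title="Digit")
plt.show()
```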

Note how all the distributions are very steep at 0 (i.e., the mean) and the standard deviations are genuinely small and near identical across all ten digits. Again, because the axes have to be zoomed in for the lines to be visible at all, it’s easy to miss just how steep these S-curves really are. Another way to look at these would be the individual derivatives of the above cumulative distributions (which would be classic bell-curve shaped probability densities), but that obviously requires 10 separate plots :P So I’ve put one below for demonstration only (corresponding to the digit 3).
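Continuing the sketch above, a density view for a single digit is just a histogram of that digit’s column (again mine, purely illustrative):

```python
# Histogram of the mean-centred normalised frequencies for the digit 3,
# reusing the `samples` array built in the previous sketch.
plt.hist(samples[:, 3], bins=30, density=True)
plt.xlabel("Mean Centred Normalised Frequency (digit 3)")
plt.ylabel("Density")
plt.show()
```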

Without adding any smoothing to the above plot, it’s easy to see that this is a very narrow distribution, where the overwhelming majority of the values are clustered around the mean and the variance is tiny. Hence the idea that, as numbers increasingly larger than THENNs are added to the sample, the distributions for each digit will get narrower and narrower until converging to a spike at 0 (a.k.a. Dirac’s delta function in the world of signal processing).

Ah bugger, didn’t realise it’s long past bed time :P Time to crash.