## Morality

Today I am going to talk about morality from a scientific perspective. Nothing I am about to say is original, and it has been described well elsewhere. Then why write this post at all? There are two major reasons:

1. I was once thinking about three laws of freedom. Thinking it was trivial, I never bothered to write it down, and now I can't remember the third law no matter how hard I try. The thought also appears to have been original, because it does not seem to be documented anywhere on the web.
2. This post is also for those who need to see the math to understand anything.

The content below is entirely hypothetical. However, it is useful for understanding the concepts at a theoretical level.

So imagine that someone discovers an instrument that can measure the happiness of an individual. This, by itself, is a questionable concept. What constitutes happiness? Is it contentment? Is it bodily pleasure? Forget all that and think of it as an abstract 'something' that we are all constantly looking for. Furthermore, let us assume that once we bring happiness into such a system, it can be added, subtracted, multiplied, and so on. Now it would be possible to measure the happiness of all the people in this world. Let it be ΣHp (Net Happiness).

We can plot the Net Happiness over time of all the people in this world in the graph as shown:

Here t1 is the moment under consideration and t2 is the instant at which all sentient beings cease to exist (Big Crunch?). Now assume that t1 is the only point at which you have free will, and that depending on the decision you make, the happiness graph can change to f(t) or h(t); g(t) is the graph if you have no free will. I argue that the action which maximizes the net happiness of every sentient being in this world [shown by the shaded region under f(t)] is the right action. Every other action is a wrong action. That is, we need to take the action at t1 that maximizes the area under this graph (here, the action that corresponds to f(t)). That is, the value that needs maximizing is this:

Net ΣHp = $\int_{t1}^{t2} f(t) \, \mathrm{d} t$

I also argue that any action which causes this Net ΣHp value to be lower than the area under g(t) is an evil action. Any action that causes the happiness graph to have an area greater than the area under g(t) is a good action.
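As a toy illustration (the trajectories and numbers below are entirely made up, not anything measurable), we can compare the areas numerically and apply the right/good/evil definitions:

```python
# Toy model: compare Net ΣHp for different post-decision happiness
# trajectories over [t1, t2]. All trajectories here are invented.
def net_happiness(traj, dt):
    # Trapezoidal approximation of the integral of happiness over time
    return sum((a + b) / 2.0 * dt for a, b in zip(traj, traj[1:]))

dt = 0.01
ts = [i * dt for i in range(1001)]   # t1 = 0.0 .. t2 = 10.0
g = [5.0 for t in ts]                # no free will (baseline)
f = [5.0 + 0.8 * t for t in ts]      # trajectory after action A
h = [5.0 + 0.3 * t for t in ts]      # trajectory after action B

areas = {name: net_happiness(tr, dt) for name, tr in [("g", g), ("f", f), ("h", h)]}
# The right action is the one with the maximum area; a good action is
# any action whose area exceeds the baseline area under g(t)
right = max(areas, key=areas.get)
good = [name for name in ("f", "h") if areas[name] > areas["g"]]
```

Here both f and h are good actions (both beat the baseline), but only f is the right action.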

Implications:

1. Clearly, good actions can be wrong actions. For example, assume a scenario in which you learn that the whole world is going to end in a few minutes. You see a beggar on the street looking for a day's food. You realize you have about 100 bucks on you. Giving the beggar 50 bucks would be a good action. But the only right action is to give him all your 100 bucks (assuming happiness increases linearly with money given).
2. It must be understood that inaction can also be an action. For example, suppose a nihilist has trapped himself in a room you cannot enter. He has access to a nuclear bomb that, if activated, will destroy this entire world and kill everyone in it. Your only option is to kill that person. Here, your inaction will result in the death of everyone in this world. Assuming that, if the world goes on, there is more happiness to be had for everyone, your inaction is evil.
3. I have assumed here that one unit of happiness given to one person is the same as one unit of happiness given to any other person: one unit of happiness given to any man, woman, Black, Caucasian, Hispanic, Aryan, Jew, Hindu, Muslim, or Christian is the same as one unit given to anyone else.
4. While it is possible for two view-points/philosophies to be equally valid, it is not very probable.
5. Since human beings cannot see all the possibilities ahead, the simplest approach we can take is a greedy one. For example, it is believed that Helene Hanfstaengl persuaded Adolf Hitler away from suicide after the failure of his first putsch. Afterwards, he went on to kill many millions of people. It is unclear what would have happened had she chosen differently at that point. However, it may not have made any sense for her not to save someone's life, given the information she had.

## Top-down and bottom-up thinking

So, if the last post was about the details, in this post I will try to explain the concept of top-down and bottom-up thinking.

The smartest people in the world have the ability to zoom in and zoom out, that is, to understand the nitty-gritty details without missing the big picture. Usually what happens is that either

1. people get focused on the details (and get lost in them), or
2. they forget about the details.

Both are equally bad and can cause failure. If you miss the Big Picture, it is very much possible that you may miss the impact of external forces that might apply to your system which ultimately results in failure. If you miss the details, it is equally possible that you do not have an accurate understanding of what is possible given the constraints.

Management folks mostly encourage top-down thinking (Big Picture) while engineering schools teach bottom-up thinking (details-oriented). This is one of the reasons for tension between management and the worker ants in technical companies.

To understand this principle in detail, let us take the example of the Iridium Communications Company.

The founding company behind this firm went bankrupt: it developed the technology for the satellites, the mobile phones, and so on, but the cost of putting the satellites into space was on the order of billions, which the company was not able to raise. They wasted an inordinate amount of money because they failed to see the external forces acting on their business plan: the Big Picture.

To explain the concept of attention to detail, take the example of IBM deciding to incentivize developers based on the number of lines of code they wrote (sorry, I can't explain this without some detail, and code and technology are details I understand). This caused many problems within IBM: people wrote code that introduced subtle bugs which they then rewrote to fix (more lines), they wrote code in a verbose manner, they introduced unnecessary complexity, and so on. What went wrong here? The root of the problem is that not all lines of code are equal.

When a factory worker puts in X amount of work to produce N units of stuff, he puts in 2X amount of work to produce 2N units. Unlike such mechanical jobs, this is not true for creative and innovative jobs like software development, journalism, and so on.

So you can see that the best decisions are made by people who can refrain from getting caught up in the details, step back, and look at the big picture from multiple perspectives, while at the same time seeing how the details fit together to form the whole system. You will also note that there is considerable overlap between the two types of thinking, which means it may be easier to explain the above principle to people who think either way.

When you understand the above principle you start to understand the idea behind a lot of quotes that go around the world, like:
“Complex systems that work evolve from simple systems that work”
“The devil is in the details”
“Details often kill initiative, but there have been few successful men who weren’t good at details. Don’t ignore details. Lick them.” — William B. Given

## Error of classification and losing detail

Today I would first like to begin with the error of classification and then generalize it to the error of losing detail. One common mistake people make is the error of classification. The error goes like this:

1. All elements of class X have properties Y and Z.
2. Therefore, all elements in the world with property Y must belong to class X and hence have property Z.
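As a minimal sketch (with made-up sets and names), the flaw shows up immediately:

```python
# Hypothetical example: every element of class X has properties Y and Z
X = {"alice", "bob"}
has_Y = {"alice", "bob", "carol"}   # carol has Y but is NOT in X
has_Z = {"alice", "bob"}

# Premise 1 holds: membership in X implies both Y and Z
premise_holds = all(x in has_Y and x in has_Z for x in X)

# The inference fails: having Y implies neither membership in X nor Z
counterexamples = has_Y - X
inference_holds = all(y in X for y in has_Y)
```

The premise is true, yet `counterexamples` is non-empty, so step 2 is an invalid inference.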

No other post explains this idea better than this one: http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/

However, that post focuses on errors of classification in logical arguments; the idea applies equally well to other domains. Moreover, errors of classification have an even bigger problem: the classification itself can be wrong.

I will give you some examples:

Everyone knows that a smart person will talk well about what they are good at, barring artificial constraints such as language barriers or mental illness. But we incorrectly assume that everyone who talks smoothly is smart, which in retrospect looks ridiculous.

I have also heard of employers trying to build slides and provide bean bags for their people because companies like Yahoo, Facebook, and Google seem to have them. After all, bean bags and slides are what make these company cultures so great, aren't they?

Another example is exams. Here is where the problem of losing detail can really bite. Most people who are good at understanding the basics of what they study can score reasonably well in exams. However, not everyone who scores well has to be a good student. They don't even have to be very smart. It is possible to tune your study habits specifically to beat the system.

This creates a negative incentive system. If you consider plaudits from other people an incentive (which in most cases is a reasonable assumption for most mortals), then more people will tune their study habits to beat the system than do what is profitable to the whole world, i.e., go to school to gain knowledge and improve their analytical skills.

This would be an interesting problem to solve: creating a culture that values detail and makes correct classifications.

## Protect your privacy

Recently I realized that Google had changed their privacy policy. In response I set out to see what info they had on me, and I was horrified to learn that they had correlated my data and built a profile on me! So I was testing out some apps and decided to make a video on how you can protect your privacy on the web. Check it out:

• Install openWRT on your router

First, you need to flash the router's firmware with the latest version of openWRT.

• Now install support for USB devices.

You need to enable support for USB. I also had to install the iso and cp437 kernel modules to make it work. I used the FAT filesystem.

• Now mount it and add the directories

mkdir -p /media/usb
mount /dev/sda1 /media/usb
mkdir -p /media/usb/router/packages

• Edit package manager to add USB as a destination

Add the following line to /etc/opkg.conf

dest usb /media/usb/router/packages

• Now install lighttpd, PHP-CGI, ctorrent to USB and configure symbolic links

For example, to install lighttpd to USB use the following commands:

opkg update
opkg --dest usb install lighttpd

When you install the binaries and try to run them, they will complain that they failed to load shared libraries. Just find the libraries one by one under /media/usb/router/packages/usr/lib and create symbolic links to them in /usr/lib.

• Configure lighttpd, PHP-CGI and ctorrent [lighttpd webroot must be /www_1]
• Now extract rapidleech stuff onto /media/usb/router/rapid and fix the bug

Since we have no X11 on our router, we need another mechanism to automate clicking through forms, verifying captchas, and so on. Rapidleech is a leech script for downloading from a variety of file hosters like Rapidshare, Hotfile, Fileserve, etc. It can be found here. Extract it to a folder named /media/usb/router/rapid. Now, there is a bug in the script at /media/usb/router/rapid/templates/plugmod/header.php:

<?php
// You can do some initialization for the template here
@date_default_timezone_set(date_default_timezone_get());
?>


These geniuses realized that this function can throw an error and decided to suppress the error message. As a result, after configuring rapidleech you will get a blank page the first time you visit the site. Just comment out the line that sets the default time zone and the site will start working normally.

Now create a symbolic link to this folder on lighttpd webroot

ln -s /media/usb/router/rapid /www_1/rapid

• Now copy the ssh generated key file to /media/usb/router/ folder

My ISP does not provide me with a public IP address. It gives me an address in the 192.168.x.x range and forces me behind their router. At work I am, obviously, behind the company's router. This gives rise to an interesting problem: both machines have to initiate the connection, since neither can connect directly to the other. The solution I have found is to tunnel the connection through an Amazon EC2 instance. For this you may need to enable incoming connections on port 2500 besides the default 22. The default key provided by Amazon is not accepted by dropbear, so you need to generate a new key. Copy the id_rsa key to the folder /media/usb/router/scripts

• Now compile and copy spectranet binary to /media/usb/router/packages/usr/bin

My ISP also has the annoying habit of logging me out every 30 minutes or so. I had to understand how the login mechanism works and write a program to log me in automatically. Anyone else trying this might need a different set of steps to log in to their ISP. I also added the startup steps for lighttpd and for opening the ssh tunnels to this program, so that I don't have to manage different things. I am publishing the code here:

You need to build openWRT to obtain the toolchain for cross-compiling to MIPS. After the build, the toolchain will be available at openwrt/trunk/build_dir/target-mips_r2_uClibc-0.9.32/OpenWrt-Toolchain-ar71xx-for-mips_r2-gcc-4.5-linaro_uClibc-0.9.32/toolchain-mips_r2_gcc-4.5-linaro_uClibc-0.9.32

After compiling the binary you need to copy it to /usr/bin/ directory of the router.

• Now write a startup script that runs /media/usb/router/packages/usr/bin/spectranet on startup if it exists, and enable it.

#!/bin/sh /etc/rc.common
# Copyright (c) 2012 Joji Antony
# All rights reserved
# Licence Affero GPL V3

START=15
STOP=15

start() {
    mount /dev/sda1 /media/usb
    if [ -d /media/usb/router/packages ]
    then
        export HOME=/root
        /usr/bin/spectranet &
    fi
}

stop() {
    umount /media/usb
}

Enable it:
root@OpenWrt:~# /etc/init.d/download enable

• Set secret key in /media/usb/router/rapid/configs/accounts.php and create /root/.profile

This is needed for rapidleech to work properly and for setting the PATH variable.

Add the following line to /root/.profile
export PATH=$PATH:/media/usb/router/packages/usr/bin:/media/usb/router/packages/usr/sbin

Now you are ready. Just switch on the router and it automatically becomes your download box.

simula67@prometheus:~$ ssh -i /home/simula67/Temp/AMAZON\ KEYS/aws_key.pem ec2-user@ec2-12-110-209-183.compute-1.amazonaws.com
Last login: Fri Jan 13 17:51:57 2012 from 180.151.42.251
[Amazon Linux AMI banner]
See /usr/share/doc/system-release/ for latest release notes.
27 package(s) needed for security, out of 44 available
[ec2-user@ip-41-196-83-37 ~]$ ssh root@localhost -p 10000
root@localhost's password:
BusyBox v1.19.3 (2011-12-31 21:23:18 MST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
[OpenWrt banner: WIRELESS FREEDOM, ATTITUDE ADJUSTMENT (bleeding edge, r29631)]
root@OpenWrt:~# cd /media/usb
root@OpenWrt:~# ctorrent -e 1 -C 20 -dd KunfuPanda.torrent

NOTE 1: Some of the terminal output shown here is only to illustrate the concepts.
NOTE 2: Some trackers refuse connections from ctorrent. Use the -A option to change the user agent string and/or use different trackers (use ctorrent -x the_torrent_file.torrent to list the trackers and ctorrent -u "tracker url" to change the tracker).

## Why we probably won’t have sentient computers

This is actually going to be a very Computer Sciency post. I will try to explain the basic Computer Science principles I learned in college as clearly as I can, but if you are reading this and did not understand something, please leave a comment and I will try and explain it better.

There has been a lot of debate about why Artificial Intelligence is bad because (insert any 'Matrix' type of apocalypse here). But do we have reasons to believe that sentience or consciousness is actually more than Turing Complete?

Let us take the problem of the undecidability of Turing Machines (I don't remember exactly what it was called; it is closely related to the Halting Problem). This is a wonderful problem which Alan Turing proved has no solution. It states that it is impossible for a Turing Machine to decide whether or not another Turing Machine will produce a particular output. Please keep in mind that a Turing Machine is a computer (or, more precisely, a computer program). How is this? Modern-day computers (i.e., the hardware) are usually general purpose. This means that modern computers are designed in a way that allows any Turing Machine to be built out of them using software. When the hardware and software combine, a Turing Machine is born. This is a machine that is actually capable of information processing (computing), and any machine capable of information processing is usually called a computer. The reason we use general purpose hardware and special purpose software is basically cost and manageability. It would be theoretically possible to build hardware that lets you play Need For Speed, but the complexity and cost of such a machine would be huge. The advantage of doing things this way is that the hardware only needs to be researched once and can then be sold in billions of units. Thus the research cost for general purpose computers is lower per unit (which is why some scientists build supercomputers out of PlayStation 3s rather than building the entire thing from scratch; companies like Google, Facebook, etc. also favor cheap commodity hardware for this reason [among many others]).

The proof of the undecidability of Turing Machines is a proof by contradiction. It goes like this:

(Please keep in mind that when I refer to Turing Machines, I essentially mean software programs)

Suppose there is a Turing Machine H1 which takes as input a Turing Machine H2 and an input for H2 (say I), and produces the output (by printing onto the screen) "Yes" or "No" depending on whether or not H2 gives a particular output (say it prints "Hello World") when given the input I.

i.e.,

Program H1 :

if( H2 with input I produces output ‘Hello World’)

print “Yes”

else

print “No”

Now, let us modify H1 to print "Yes" if H2 prints "Hello World", and to print "Hello World" if H2 does not print "Hello World" on input I.

Hence the final (hypothetical) code of H1 is:

if( H2 with input I produces output ‘Hello World’)

print “Yes”

else

print “Hello World”

Now let us make a new machine H3 such that H3 takes only H2 as input and simulates H1 with both inputs set to H2; i.e., it makes a copy of H2 and gives the two copies as the two inputs to H1.

Program H3 is:

Simulate H1 and pass H2 as both inputs [i.e., a function call: H1(H2, H2)]

[Figure: final states of the Turing Machines H1 and H3]

Now comes the twist: let us feed H3 as input to H3. What output should come from H3 (the simulated H1)?

If it prints "Yes", that means H1 is saying that when H3 is given as input to itself, it should print "Hello World". But that is exactly what we did [give H3 as input to itself], and it printed "Yes", not "Hello World". If it prints "Hello World", that means we just gave H3 as input to H3 and it printed "Hello World", in which case H1 should have given us the output "Yes". In both cases we arrive at a contradiction. Hence these machines cannot all exist.
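The same diagonal argument can be sketched in runnable form. Here `decider(prog, inp)` stands in for the hypothetical H1 (the function names are my own, purely for illustration); whatever decider you plug in, the H3-style program built from it forces the decider to answer wrongly about H3 run on itself:

```python
def h1_is_wrong(decider):
    """Build H3 from a claimed decider H1 and show the decider
    must answer incorrectly about H3 run on itself."""
    def h3(prog):
        # H3: ask H1 whether prog(prog) prints "Hello World",
        # then do the opposite of whatever H1 predicts
        if decider(prog, prog):
            return "Yes"
        else:
            return "Hello World"

    claim = decider(h3, h3)                # H1's claim about H3(H3)
    actual = (h3(h3) == "Hello World")     # what H3(H3) actually prints
    return claim != actual                 # True means H1 was wrong

# No matter how the decider answers, it is wrong on the diagonal case
always_yes = lambda prog, inp: True
always_no = lambda prog, inp: False
```

Of course a real H1 would be some elaborate analysis rather than a constant function, but the construction defeats any candidate in exactly the same way.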

Hence we see that it is impossible for a Turing Machine to say whether or not a particular program produces a particular output (which is why software testing can never be completely automated. Ever). But it is my feeling that, given a long enough time span, human beings can do that (this is unsubstantiated, I know, but I feel that way). If the program is a billion lines long, it might be impossible for a human being to decipher it all in a lifetime, and hence my hypothesis might be too hard to prove. But I seriously think that, given enough time, our brain can make a statement on whether or not a program will produce a particular output. Thus we are somehow different from computers (Turing Machines), and we always will be.

Three things:

1. Please do not think that this somehow implies there is a God or something. It might just mean that we are Turing Machines plus some more capabilities, just as Turing Machines are Pushdown Automata with some extra sugar. But it also leads to interesting conclusions: if we assume every physical or chemical reaction in the world can be simulated on a computer, then consciousness cannot be the sum total of the physical, chemical, or electromagnetic reactions in the brain.

2. This also means that I do not believe a human brain can ever be completely simulated on a computer, because if it could, the underlying machine would break free of the limitations of the Turing Machine (unless, of course, someone can prove that the human brain also cannot say whether a particular program will produce a particular output or not).

3. Thanks to John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman for their book "Introduction to Automata Theory, Languages, and Computation".

## Today’s exploits

I have been wandering through the web today and loads of things caught my attention, so I thought I would share some of them with you.

I have been looking through the HTML5 specification and it looks like it's shaping up well. There is loads of buzz around the video tag and its support for WebM and H.264, yet nobody bothered to point out some of the other cool features of HTML5. I am not going to list them all here; you can look them up at html5rocks.com.

Some of these features, such as WebSockets (for full-duplex communication), are long overdue. Others seem likely to be adopted widely in the future. A negative aspect of this feature-rich but complicated design is that aspiring web developers will find it harder to master HTML, whose simplicity is the foundation of its success. Even kids could learn it and develop web sites, which really helped the web grow and become the platform of choice for expressing one's views. The more complicated we make it, the higher we set the entry barrier for newcomers, which could have a detrimental effect on the quality of the web. Another worrying aspect is the time it will take browsers to render such pages. I hope that too many features will not kill the user's seamless web experience.

But after a while I am sure we would be wondering how we could have lived without HTML5!

Lately, I have been hearing a lot of talk about Google's world domination plan. I wonder how they are going to do that. Are they going to do it with their Free Software browser Chromium (or the Google-branded version, Google Chrome)? With their messaging platforms such as Gmail? With their Free Software mobile OS, Android? Or could they be doing it with, OH NO, GOOGLE SEARCH? None of these platforms lock you into anything. You can very easily search on Bing if you don't like Google. You can install a custom Android ROM on your device if you don't like Google's Android. Even after doing all this work, Google is still left to charm you into using their products. If it were Microsoft or Apple, they would have had you by the balls by now.

I think that for most people, the fact that their computing experience sucks, or the fact that they are being cheated and controlled by the high-tech companies, doesn't matter. To them, issues like privacy are more grievous. I am not saying that privacy is a trivial issue, but I think that if you work hard, it is a problem you can fix yourself. But if a company sells you crappy software and forces you to use it, there is nothing more you can do, which I think makes it the greater crime. So while I do think Google should be more concerned about privacy issues, at this point in time I am happy that they are around.

Besides, if you want all the cool features like navigation assistance, threaded conversation view in mail, and so on, it is natural that a company which can produce all that gets a huge market share. The danger, however, is when they turn evil. So far Google has no record of handing your data over to third parties or associating your searches with your name. Sergey Brin once said that if a government wants your data, it would be better off getting it from the ISPs than from Google's data centers. Indeed, countries like the UK have laws stating that all ISPs must log their network traffic for 6 months. It would be better to get the raw data from there than data optimized for search analytics.

Also, did you use Yahoo!'s Search Direct? Yahoo! is saying that people are no longer looking for links when they search; they are looking for answers. And they are right. Sometimes I wish I could sue Google.

I would also like to leave you with the Think Quarterly magazine from Google. The magazine's online version deserves props for its simple yet elegant design. That is one e-zine I am looking forward to. Those chaps at Google can design some cool web sites.

Bastards.

## The Google – Bing Debacle

If you didn't know already, sparks have been flying recently in the war between Google and Bing, because Bing copied Google's search results. The public propaganda from Microsoft has been so full of sh*t that I thought I would point out some of the mistakes they are making. To think that these people make the operating systems most people use daily scares the living hell out of me!

The biggest crap they spill is describing the process Google used to prove the copying as a "Sting Operation", "Click Fraud", and "Honeypot attacks".

Mother of God! Honeypot "attacks"? HONEYPOT ATTACKS?!?! How are honeypot operations 'attacks'? Google just put fake results onto their own servers and watched to see whether Bing would copy them. They caught Bing red-handed copying the fake results. Damn! It's like Microsoft is saying: "Mommy! Google caught us!! That was soooo BAD on Google's part". What are they accusing Google of? Being too smart? I bet Microsoft never thought Google could catch them doing it.

Another piece of crap is the phrase "Copying goes both ways".

Before Google there were search engines. But it was Google's idea to sort the search results in decreasing order of relevance. When Yahoo! and other companies went with the portal approach, Google said, "We will do search. It is a hard enough problem." Have you ever considered how hard the problem of search is? When you enter a search query, the search engine has to search through some 12 billion web pages to find the ones you are looking for and order them by relevance, all in under 300 milliseconds. Google deals with hundreds of millions of search queries a day! So you could say the modern search engine itself was Google's idea, which would make Bing itself a Google rip-off! Even the "Did you Bing today?" tagline was Microsoft's idea of turning Bing into a verb and cashing in on the brand name, like Google. Taking someone's idea, implementing it, and building on it with your own innovation is different from scraping off someone's search results. Take a look at how Google tries to scale its infrastructure.

Microsoft also claims that what they collect is only "clickstream data".

But the value is not derived from the behavioural pattern of the user. Even if the customers supply the clicks, Bing should not use them, because the search results were produced by Google, not the customers.

Also, Microsoft claims there are 1000 signals that Bing listens to when computing the results page, clickstream data being only one of them.

I wonder what the other 999 signals are for, if they are just ripping off Google's results. My close friend put it nicely: "It just goes to show, no matter what Microsoft does, Google is always better".

Yay. Every time Google does something, Bing takes it as a compliment. I don't understand how they do it. It must be some sort of talent. Of course, Bing would eventually come to matter if they keep ripping off Google's search results; Google is a fairly good search engine, after all.

The bottom line is that stealing is just stealing. Google is worried because someone is stealing their search results, and that is understandable. If Google falls because of this, everyone would be stuck with Bing, which is how Microsoft usually does things. But then Bing would have no one to scrape results from, and everyone's web experience would suffer. Do the folks at Microsoft think they are making the world a better place?

PS: More PR bullshit from Microsoft

IE8 vs Chrome vs Firefox

http://www.microsoft.com/windows/internet-explorer/compare/default.aspx

## Package Management System

Most Windows users are baffled at the "complexity" of installing software on Linux-based operating systems such as Ubuntu, Fedora, etc. This post tries to explain why this perceived difficulty exists and why the system is different (and better) on most Linux-based operating systems.

Due to the free nature of the GNU/Linux operating system, most programs, when they require a specific function, call that function from a pre-written library. Think of a library as a collection of specific functions that may be required by many pieces of software. For example, a multimedia library may be required by your media player (such as VLC or Windows Media Player) and by video converters (such as Total Video Converter). These libraries are called the dependencies of the program. The libraries may have dependencies themselves, producing a long list of dependencies which must be present in the system before the program can run. To solve this problem, the package management system was born. The package management system maintains a list of dependencies for each program, and when the user asks to install a particular program, the package manager automatically installs the dependencies before installing the application, so that everything works out smoothly.

The problem is that most users find it difficult to install software on GNU/Linux because they are used to the Windows environment, where you download an "exe" or "msi" file, double-click, click "Next" a couple of times, and voila!, the program is ready to run. This is because Microsoft's philosophy was a "mine" philosophy, which relied on computers standing isolated and running the applications that meet each user's requirements. We all know Microsoft was a little slow to the "the network is the next step in computing" mantra. Also, since Microsoft does not believe in helping any other company with their products, they would never incorporate a package management system on the Windows platform (although this strategy of theirs is changing with the advent of Windows 8). But the package management system in GNU/Linux operating systems has its advantages:

• All programs that depend on a library are updated when the library is updated. For example, let there be a library function that generates a random number, used by a large number of programs in an intermediate step. When the algorithm that generates the random number is improved, all programs that use the library automatically run faster.
• Memory is used more efficiently, since multiple applications do not each need to load code that does the same thing into memory. This is perhaps one reason GNU/Linux systems have a smaller memory footprint compared to Windows systems.
• With a single click or command you can update the entire software collection on the computer to the tested and stable new versions. This is because the Package Manager maintains a database of all the installed applications and the administrator who maintains the repositories release only tested software into the software repositories.
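The core of what a package manager does when you ask for one program can be sketched as a depth-first walk over a dependency list (the package names and graph below are invented for illustration; real tools like apt or yum also handle versions, conflicts, and downloads):

```python
# Toy dependency resolver: install each dependency before its dependents.
DEPS = {
    "vlc": ["multimedia-lib", "gui-lib"],
    "multimedia-lib": ["libc"],
    "gui-lib": ["libc"],
    "libc": [],
}

def install_order(pkg, deps, installed=None, order=None):
    """Return the order in which packages must be installed for `pkg`."""
    if installed is None:
        installed, order = set(), []
    for dep in deps[pkg]:
        if dep not in installed:
            install_order(dep, deps, installed, order)
    if pkg not in installed:   # each package installs exactly once,
        installed.add(pkg)     # even if several programs depend on it
        order.append(pkg)
    return order
```

Asking for "vlc" would install libc first, then the two libraries, then vlc itself, which is exactly the "dependencies before the application" behaviour described above.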

There are popular tools that can make the whole process work offline as well. But since they are free software, there is little marketing behind them, which is why community involvement is so important in free software. With a little effort, the package management system can be a blessing for any former Windows user.

## Finally, Livejournal

Livejournal seems to be exactly what I was looking for. In reality, I didn't know what I was looking for until I saw Livejournal. I can't summon the energy to write large blog posts centered around some vague idea. God knows I do enough of that in exams. What I want is to send quick thoughts in short snippets to a server online, where they will stay for all the world to see.

Yeah, I like the idea.