AlexJ's Computer Science Journal

LinuxCon Europe 2015 – On stage (part 2)

[Article is part of the series on LinuxCon Europe 2015]

The evening of the first day contained another set of keynotes. They started with the Kernel Developer Panel. Aside from the chat with Linus, this is the most interesting part of LinuxCon, in my opinion. The panel is formed of a couple of kernel maintainers, with Greg Kroah-Hartman the most known and a usual presence. It was rather short for the potential it had, covering questions from the conference attendees.

The first question was whether the kernel still needs more developers, and the clear answer was “YES!”. Linux is the biggest collaborative project in the world, but it still needs people to keep it going. As a counter-question, one of the panelists asked whether the current structure can manage more and more developers. The consensus was that even though it’s not a big problem now, it could use some improvement, with a focus on better communication.

Expanding on that, does the Linux kernel need more maintainers? This question sounded familiar, since I’ve heard it before at the Kernel Panel at LinuxCon 2013 in Edinburgh. Maintainers are needed and they are hard to get. They play an important role in the kernel’s development, and it’s not the most glamorous job. Maintainers not only require the dev skills and the knowledge for that specific subsystem, but they also need to read a lot of emails, give feedback and actually merge patches. Again, communication is a keyword, and everyone agrees that more needs to be done to attract people to this position. There was a question whether some subsystems have maintainers that don’t want to pass responsibility to other (new) people even if they can’t keep up themselves. But it looks like most of the time this is not the case: it’s generally accepted that people understand the peer-to-peer relations within the project and that it is a meritocracy, so if people prove themselves worthy, they can become maintainers if they want.

Another topic was testing and continuous integration within the Linux kernel. Greg was optimistic about it and said things are pretty OK. It’s not an easy task, but several automation tools have been put in place to help things out. Coccinelle (whose developer, Julia Lawall, was on the panel) was given as an example of a tool that makes patches more reliable to merge.

Following the Kernel Panel, a representative from SuSe came on stage for a presentation about DevOps. DevOps is another of those buzzwords that you’ve been hearing lately. Many wrongly consider it a position in which someone does both the job of a Developer and the job of a Sysadmin. In fact, DevOps is more of a workflow in which both Developers and Sysadmins are aware of each other’s needs and expectations. This presentation focused on the idea of taking DevOps into multi-vendor environments and sharing the DevOps workflow across companies. Not an easy task, and maybe not something you would want to do, but the guys from SuSe told the story of how they do it, with the help of a tool called OpenQA.

The final presentation of the day was about patents. The CEO of Open Invention Network (with whom I talked earlier in the day at the OIN booth) came to talk about what his organization has been doing for the last 10 years. They deal with something geeks don’t really like interacting with: legal stuff. OIN has been working with companies like Google, Red Hat and SuSe (who are also the sponsors) to create a shareable pool of patents that involve Open Source technologies. Yes, most of us don’t like patents, but they do exist and they’ll be around for a while. Someone needs to take care of the dirty work and both buy needed patents and defend against patent trolls.

The first day was rather packed with interesting things to do and see. It ended with Jim toasting a pint of Guinness with the guy from OIN, after which we were all treated to one in the lobby.

Coming up, the chat with Linus and a day about containers.

LinuxCon Europe 2015 – On stage (part 1)

[Article is part of the series on LinuxCon Europe 2015]

Coming in early in the morning to the Dublin Convention Center (which looks like an awesome venue) and seeing the flood of people at registration (which I avoided by being on time), I could tell that it was going to be a big event. After the morning tea and before the opening, I scouted the main conference room to see which companies have booths at the conference. I was glad to see a large number of them and planned to return after they finished setting up.

The keynote series was introduced, as usual, by Jim Zemlin. He is a very interesting “CEO” (the Executive Director of The Linux Foundation) and I enjoy seeing him talk about Linux and the Linux Foundation. The first day of the conference matched Linux’s 24th birthday, so we started with a happy birthday cheer. Well, to be fair, it’s hard to pinpoint an exact birthday (there is also Linus’s announcement on the Minix list on the 25th of August and the release of version 0.01 on the 17th of September), but the organizers were referring to the first public version, v0.02, on the 5th of October 1991. Also this week is the Free Software Foundation’s 30th birthday, so greetings and thanks to them too.

Jim also announced new Linux Foundation projects, like FOSSology, a framework that scans for software licenses on projects to ensure license compliance.

Jim will remain the master of ceremonies throughout the conference.

The first actual keynote presentation was a very interesting one, worthy of TED, under the catchy title “Man vs. Machine: High Frequency Trading and the Rise of the Algorithm”. The topic was basically Artificial Intelligence, but not the kind we have been hearing about for the last 20 years, for one reason: the AI we are used to thinking of as futuristic is not only here, it has been here for a while and affects our lives daily. Center stage: stock exchange algorithms, which are in control of the world’s financial markets. These came to be because of the limits of human processing capabilities compared to the need for ever faster decision making. It’s both amazing and scary what they currently do, but it’s important to see their limitations. One of the focal points was the things the algorithms can’t do yet (like “read” the news), but also what humans can’t do (respond fast) and why the human-AI interface is the next step in the development of the world.

The next presentation was from IBM. I found out that the Power architecture was still alive and what IBM is doing for the Linux world. But it was an old-school marketing presentation (though the presenter knew how to do her job well), and this is why IBM is the Diamond Partner of the conference. In short, it was an advertisement for a line of Linux-focused servers powered by… well… OpenPower. The presentation was rather confusing, though, and I still didn’t understand what was so open about OpenPower. I had to go to the IBM booth after the presentation to talk to some engineers to find out (stay tuned for another part of the article).

The last of the keynotes was about drones. They are a popular geek toy these days, but people are usually accustomed to very simple drones which are just remote-controlled flying cameras. However, their immediate future is actually autonomous drones. One of the presenters was from Willow Garage, a startup from Menlo Park that I actually interacted with about 4 years ago (I went on a volunteer study that involved testing one of their robots). They try to advance the world of robots in small increments. For example, they presented drones that could understand the environment and navigate their surroundings independently. Slowly but surely, SciFi-looking things are coming to the real world.

That marked the end of the first series of keynotes and the start of the roughly 12 parallel presentation tracks in the smaller conference rooms.

The first such presentation that I attended turned out to be awesome. I went after a colleague introduced me to the term bufferbloat, which refers to the nasty latency problems that come from the use of buffer combinations when doing network packet switching/routing. The presentation was titled “Bufferbloat 3.0: Recent Advances in Network Queuing” and presented by a cool guy from Brocade. When I walked into the room, I saw on the stage a child-size inflatable pool, some plastic pipes and a series of liquid containers. I thought to myself, “this presentation is going to be interesting”. And it was. The presenter was as entertaining as he was informative. He used the tubes and liquids to visually represent packet queuing algorithms. From FIFO to RED to combinations of queuing mechanisms stacked like a cake, he presented their pros and cons and when to use them and when not to. The current state of buffering in the network world is not that great, but the people from the Bufferbloat community are trying to raise awareness of the need to think things through when it comes to handling packets on the Internet.
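To make the FIFO vs. RED contrast concrete, here is my own rough sketch (not code from the talk, and the threshold parameters are made up) of the two drop decisions:

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """RED's drop decision: below min_th accept everything, above
    max_th drop everything, and in between the drop probability
    grows linearly with the (smoothed) average queue size."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def fifo_tail_drop(queue_len, limit):
    """Plain FIFO only drops when the buffer is completely full,
    which is exactly what lets big buffers fill up (bufferbloat)."""
    return queue_len >= limit

print(red_drop_probability(30, 10, 50, 0.1))  # 0.05: halfway between thresholds
print(fifo_tail_drop(30, 50))                 # False: FIFO still accepts everything
```

The point of RED dropping (or marking) packets early, with some probability, is to signal congestion to senders before the queue, and thus the latency, gets out of hand.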

Next, I attended a presentation from Red Hat about SR-IOV support in oVirt. Two years ago, at my previous LinuxCon, I really enjoyed the oVirt presentation, so I was optimistic. I wasn’t familiar with the term SR-IOV, which stands for Single Root I/O Virtualization, a PCIe extension that allows one device to appear to be multiple separate physical PCIe devices. I wanted to find out more about the technical details; however, the presentation turned out to be more about oVirt configuration than about SR-IOV itself. Perhaps it would have been more interesting if I hadn’t been familiar with oVirt. Still, I got the basic idea, which is that this offers hardware assistance to hypervisors when it comes to NICs.

Since this post is becoming longer than expected, I am going to end here and continue with a second part on the presentations, starting with the Kernel Panel from the evening of the first day.

LinuxCon Europe 2015 – Intro

After two years, I am back in the British Isles at LinuxCon Europe 2015. This year, Dublin is the hosting city, so after a weekend full of Guinness, time to have fun with Linux geeks.
LinuxCon, as the name suggests, is the biggest Linux conference and it is organized by the Linux Foundation. The one here in Dublin is part of the LinuxCon Europe series. I participated two years ago at LinuxCon Europe in Edinburgh and enjoyed it so much that I am glad I had the chance to repeat it.
The entry ticket was more expensive this year ($725 with early booking); however, there was a significant discount for hobbyists (people who weren’t sponsored by a company to attend), bringing the cost to $300. Since I came on my own, I went for it.

Technically, the event combined the actual LinuxCon Europe with the Embedded Linux Conference Europe and CloudOpen, plus some smaller conferences like the UEFI Forum (which was free) or the Yocto Project Developer Day (which needed a separate ticket).

The conference itself was held at the state-of-the-art Dublin Convention Center, on the banks of the River Liffey, in front of the best-looking modern bridge in the city (a bridge in the shape of an Irish harp). The venue was very nice and well suited for a high-class event (with the exception of non-functional wireless in the main amphitheater when 1000 geeks were online).

From the very start I could tell that it was going to be a big event. The number of participating companies was big (at least compared with the one in Edinburgh). They ranged from big names like IBM, Intel, Samsung and Google to smaller companies, some well known like GitHub and some less known like Enea, and of course big Open Source names like Red Hat and SuSe.

It was also very big judging by the schedule. It lasted 3 whole days, which can be considered a long conference. I am used to attending conferences where you have several presentations at the same time (different ‘tracks’), but this one had up to 13 happening at the same time. Not to mention that there was always some activity on the big floor of the conference where the company booths were.

The big topics discussed were (unsurprisingly) around the popular 2015 buzzwords of “containers” and “Internet of Things”. Other keywords/products that appeared in titles were security, OpenStack, embedded, UEFI and drones.

Rather than using my usual style of describing the event chronologically by splitting the story into posts per day, I am going to split it into two main parts (not including this introduction). In the first, I am going to summarize what happened at the presentations (the keynotes and some of the smaller parallel sessions that I attended), and in the second, what happened in the common area where the company booths were (which was, at times, more interesting).

I plan on following up the article in the following days.

Thoughts on the presentations: [part 1]

State of Mobile Operating Systems

… and thoughts about their future.

When we think of Mobile Operating Systems we mostly think of Android and iOS, which have the vast majority of the market share. I am (openly) a big Android fan for many reasons. Though it’s not perfect (for example, I would like the development model to be a little more community-based than it is now), it is the one I will stick with for the short term, at least. As for Apple’s iOS, I do give them some credit for some of the things they did from a tech point of view, but because of the way Apple does things, I can’t approve of them. I do recognize their dominance in some important markets (like the US), though.
It’s not perfect that these two have almost all of the market share, but at least it’s a better balance than what we have on the desktop and laptop market due to Windows.

But let’s talk about other operating systems. And, unfortunately, I have to start with Microsoft’s Windows Mobile. Fortunately, I’ll stop quickly, because even they realized that they failed in this market. So, after making a big push (see the Nokia destruct…er… acquisition), they toned things down in the last months. However, I do expect them to be around and have a reasonable share just because of their name and influence. Which is not a bad thing, because they might still bring new things to the industry.

Now let’s get to some OSs that are very promising, starting with Mozilla’s FirefoxOS. As a Firefox fan, I have watched FirefoxOS closely and I like the way things are going. They are ‘the underdog’ and seem to be doing a good job so far. Mozilla managed to get FirefoxOS on hardware through partnerships with local manufacturers. And they chose a good market, low-cost devices, where I think they can have a very big impact. It’s the same market that Google recently wanted to push for with Android One, so Mozilla was on to something.
FirefoxOS, of course, pushes a lot on Web technologies (like HTML5). Which is good because 1) they are good at that and 2) they are not building an Android clone. Like the other OSs we will be discussing, FirefoxOS is based on Linux, but it rebuilds almost everything that Android has (though it does share some very low-level things like the HAL).
Those things, combined with the fact that they will have the support of many Open Source enthusiasts, makes me hopeful about FirefoxOS having its place on the mobile market in the near future.
As a personal note, I would like to see FirefoxOS on something other than just (smart)phones. The first obvious thing would be a tablet. But I would much rather see a Chromebook competitor based on FirefoxOS. Firefox is still an awesome browser, and combining that with Google’s great idea of building a browser-oriented OS could be something worth having. There is currently a Kickstarter project to build a Chromecast-like device based on FirefoxOS, called Matchstick.

Next, there is Tizen. Tizen has a complicated history, starting out as a merger of smaller projects from Nokia and Intel, later combined with projects from Samsung, all while receiving the backing of the Linux Foundation. Currently, Tizen is a project of the Linux Foundation, so you would say that it’s even more ‘open source’ friendly than FirefoxOS. But the truth is that the ones pushing it are Samsung and Intel. Which is not necessarily a bad thing, and it does make sense.

Tizen is much closer to Android than FirefoxOS is, but still built from scratch on top of Linux, doing the same things Android does but differently. The project comes more out of the need to have a viable alternative to Android. Samsung is probably the biggest Android device manufacturer, but it knows that it’s dependent on Google for Android. And Samsung also knows that it has the power to create that alternative. It’s also in Intel’s interest to push into the ARM-ruled mobile market with its chips, and backing Tizen is a good move (though Intel is one of the big supporters of Android too).
I had heard of Tizen for some time, but my first real interaction with it was at LinuxCon Europe 2013, where it was one of the main attractions. I expected Samsung to launch several Tizen-based products in the last year, but so far they haven’t. So I find the project in a strange state. They have the potential to launch something interesting and the market position to push Tizen close to Android and iOS, but they seem to be on standby. So either they are waiting for a better time to start the push or they are having doubts about it [citation needed], which would be sad.
So, for now, Tizen is not touching mid- and high-end phones (where I think it has its place) and is limited to some low-end phones. But it has been launched on the Samsung Gear, which is the new hype in mobile devices.

One of Tizen’s cousins is also a potential big player in the mobile world: SailfishOS. Both it and the company behind it, Jolla, have their roots at Nokia. After Nokia decided to go with Windows Mobile, some people left Nokia and started Jolla in an effort to reinvent Nokia’s vision. The result was SailfishOS and the Jolla smartphone powered by that OS. SailfishOS is Linux-based, using the open source Mer core but a proprietary UI.

And that UI is what sets this OS apart from the rest. They tried to rethink the mobile interface like Apple and Google did a few years ago. Having recently held one such phone in my hand, I can say that the result is interesting, so they may be on to something. But they are still one of the new additions to the mobile OS list, so it’s too soon to tell.
However, seeing how Nokia will soon be back in the smartphone market, we might expect them to join forces again and release Jolla’s product as a Nokia-branded phone (I am just hypothesizing / wishful thinking). Nokia is currently lagging behind because of the Microsoft fiasco, but they too are focusing on UI, as seen in the N1 tablet, which is Android-based but with a custom UI.

Another contender is the Ubuntu Phone by Canonical. Canonical has made a name for itself with the Ubuntu distribution for desktops and laptops. Though they alienated some of their users when they made the decision to move to Unity for their interface, we have since found out that Unity was their bet on the future. More specifically, to come up with an interface that provides the same look and feel across platforms, from desktops and laptops to tablets and phones.
Canonical’s goal is to provide a unified experience and that is going to be their selling point for the mobile world. That, along with some other things from Apple they try to put into their products makes them more of a competitor for the iOS market (though, they still are far away from that).
It’s hard to tell if they are going to make it, since Microsoft did exactly the same thing with the move to Metro. And they had the same results: pushing a mobile interface onto a desktop platform makes desktop users angry. Also, Microsoft, as stated earlier, has backed down from the mobile war, concentrating on desktops, where they are still OK, and maybe on improving their tablet market share.
So Canonical will have a tough battle to fight. Even if they have some new ideas and interesting tech features lined up for their Ubuntu Phone, if they plan to fight Apple and Microsoft for users, it’s not going to be easy. Also, because of their record of taking an Apple-style “we know what is better for you” approach instead of being an open source company that listens to the community, it may be that not even open source geeks will stand by them.

All the products discussed so far (with the exception of iOS and Windows) are open source (at least partially) and all based on Linux. But they all have distinct frameworks (some share bits and pieces). So it’s kind of strange, because you would expect to see many more forks on the market, especially Android forks. They do exist, so let’s also mention them.

An honorable mention is Microsoft/Nokia’s Nokia X, which was Android with the Google parts replaced by Microsoft ones. It seems like a match made in hell, but I was kind of glad it existed. Because that is the power of open source: being able to create a similar alternative and bring your own features to the mix. Though it might not have been a happy moment for Google (much like Oracle repackaging Red Hat’s Linux distro), it was something nice for Android fans that were also Microsoft fans (I guess they exist). But, alas, the project had a very short life.

Also worth mentioning is Amazon‘s FireOS (not to be confused with FirefoxOS). It has been deployed on popular devices such as Kindle and some not so popular devices such as the Fire Phone and Fire TV. But I think Amazon is still deciding what they actually want to do with their OS.

The one I want to talk about is CyanogenMod, a very popular Android fork. Though I haven’t used it (since I actually like the Google ecosystem), I liked the idea that it existed and had a strong community. Issues that weren’t addressed by Google could be addressed by CyanogenMod. Providing something different yet the same… that’s open source.
But it seems like CyanogenMod turned into something different when they decided to become a company and launched a rather aggressive campaign against Google (they stated that they want to take Android from Google… which is not exactly how this works). They are having success with the OnePlus partnership, but that seems to be hitting some legal issues at the moment. So it’s a bit unclear where the project is going.
Still, I think that there is a place on the market for strong non-Google Androids (clones) and if not CyanogenMod, then someone else will come along.

So, it seems that it’s an interesting time for mobile operating systems. Plenty of choices and all with their benefits. This is great because we have seen how fast the mobile industry has evolved in such a short time and it looks like things will only get faster. This is in contrast with how slow things have been on the desktop market because of the Wintel monopoly, so it goes to show that a free market is a better market.

I am really curious how FirefoxOS, Tizen, SailfishOS and Ubuntu will evolve, and I am also waiting to see what new OS will come along.



Regarding Cyanogen, there are actually two distributions. CyanogenMod is still the community version (like Fedora), while CyanogenOS is the commercial version (like RHEL) shipped by the company Cyanogen.

K, kilo, kibi, bytes, bits and the rest

Let’s start with some well known facts. First, unless you live in the US (and Myanmar and Liberia), you use the metric system. This means that you have some basic units, like meter, second, newton etc. Just like every basic unit, they are arbitrarily defined. But they are clearly defined and standardized (the International System of Units aka SI). What are not arbitrary (unlike the imperial system) are the multiplicators.

The multipliers are: kilo (k), mega (M), giga (G), tera (T) etc. You also have the reverse, like milli (m), nano (n) etc., but they are not important to this discussion. What is not arbitrary about them is the fact that each is related to the next by a factor of 1000. Any factor would have been acceptable, but base 10 makes sense since humans decided that base 10 is the ‘natural system’.

Second, we know that the smallest unit of data is the bit (the binary digit). We also “know” that one byte is 8 bits. Actually, the byte has had different meanings over time, until it was standardized by IEC 80000-13 as 8 bits. The name octet is used to avoid confusion. But WE KNOW a byte has 8 bits.

So how much is a gigabyte? Here is where the debate starts. In computer science and engineering, the ‘natural base’ is base 2… usually. So multiples of 1024 are usually more helpful. People started to use kilo, mega and giga to refer to multiples of 1024. Some say that 1 kilobyte is 1024 bytes. And since words can have different meanings, we could use kilo to mean something else in IT than in the rest of science. End of story.

Only it’s not that simple. Because even in IT, sometimes you need multiples of 1000 and sometimes multiples of 1024.

Memory measurements (like the “size of RAM”) need to be in multiples of 1024, because sizes of memory are powers of 2. And that is because memory addressing is done using powers of 2. So a MegaByte of RAM is 1024 KiloBytes, which is 1024*1024 Bytes. This might also extend to storage these days because of flash-memory-based storage.
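The “powers of 2” point is just arithmetic: n address lines can select 2**n distinct bytes, which is why memory sizes come out as powers of two. A quick sketch of my own:

```python
def addressable_bytes(address_bits):
    """How many distinct bytes n address lines can select: 2**n."""
    return 2 ** address_bits

print(addressable_bytes(10))  # 1024 -> 1 KiB
print(addressable_bytes(20))  # 1048576 -> 1 MiB
print(addressable_bytes(32))  # 4294967296 -> 4 GiB
```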

But in networking, for example, we don’t have that ‘limitation’. The base unit is the bit and we can transmit as many bits as we want. Speeds are measured in bits per second, and measurements can be done with normal SI multipliers. So Gigabit Ethernet is 1 gigabit per second, or 1000 megabits per second, or 1 000 000 kilobits per second. We can divide by 8 and just use bytes. But still, it’s 0.125 gigabytes per second, which is 125 megabytes per second, which is 125 000 kilobytes per second, which is 125 000 000 bytes per second.

This is confusing and a solution was needed. And one was found: another set of multipliers for base 2, called binary prefixes. Thus the terms kibi (Ki), mebi (Mi), gibi (Gi), tebi (Ti) etc. were introduced. All they do is say that one kibi is 1024 of something, one mebi is 1024*1024 of something, just like any other prefix. So a kibimeter would just be 1024 meters and one gibigram would be 1 073 741 824 grams. But I can’t imagine using them outside of measuring bits and bytes of information.
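Since the prefixes are pure definitions, they are easy to check with a few lines of Python (my own snippet):

```python
# SI (base 10) prefixes vs binary (base 2) prefixes, straight from the definitions
si = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
binary = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

print(binary["Ki"])            # 1024
print(binary["Gi"])            # 1073741824 -- the "gibigram" above
print(binary["Ti"] / si["T"])  # ~1.0995: a tebi is almost 110% of a tera
```

Note how the ratio between the binary and the SI prefix grows at each step, which is exactly the problem discussed next.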

So that should solve it, right? It should, but it didn’t, because people still keep using kilobit and kilobyte in a non-standard way. And this is a problem, and it’s going to be an even bigger problem: while a kilobyte and a kibibyte could be considered equivalent within a margin of error, a tebibyte is almost 110% of a terabyte, and that is a rather big difference.

The problem exists in the consumer market, where hardware manufacturers sell RAM cards of 8TB when they actually sell 8TiB, and maybe it would be hard to change the perception of the average buyer because it’s too technical. But it seems technical people are confused too, and highly technical manuals are also lagging behind when documenting things. Hence the reason behind this article, which is to point out some problems in the man pages of Linux commands.

Let’s start with the man page for the free utility. By default, free gives its output in kilobytes, as the manual says. But does it? [I’ll snip a small portion of the output]

[alexj@ptah ~]$ free -k
Mem: 7712244

Let’s see the other options from the manual:

-b, --bytes
Display the amount of memory in bytes.

-k, --kilo
Display the amount of memory in kilobytes. This is the default.

-m, --mega
Display the amount of memory in megabytes.

-g, --giga
Display the amount of memory in gigabytes.

--tera Display the amount of memory in terabytes.

Let’s test.

[alexj@ptah ~]$ free -b
Mem: 7897337856

[alexj@ptah ~]$ free -m
Mem: 7531

So the values are 7897337856, 7712244 and 7531, which, after a quick calculation, turn out to be successive truncated divisions by 1024. So free by default uses kibibytes, -m shows mebibytes and -g gibibytes. The parameters should really be -Ki or --kibi, -Mi or --mebi etc.
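You can redo that quick calculation yourself; the chain of truncated divisions only works out in base 1024, not 1000 (the numbers are taken from the output above):

```python
# The three numbers reported by free -b, -k and -m on the same machine
b, k, m = 7897337856, 7712244, 7531

# Each step is a truncated (floor) division by 1024, i.e. binary prefixes
assert b // 1024 == k  # so -k really prints KiB, not kB
assert k // 1024 == m  # and -m prints MiB, not MB

# With true SI kilobytes the number would have been different:
print(b // 1000)       # 7897337, not 7712244
```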

But since free is surely used in a ton of scripts, you can’t change things now. You can only patch things up to be less confusing, like the following addition:

--si Use power of 1000 not 1024.

Only this parameter just adds more confusion. “--si” is a reference to the International System. But both kilo and kibi are internationally standardized prefixes (SI and IEC, respectively). So the name is simply confusing. And using 1000 instead of 1024 doesn’t change the definition of kibi, as the writers of the manual seem to think.

Oh well, at least in the world of storage things are better. dd is a basic tool for block storage. Here is the relevant section of its man page:

N and BYTES may be followed by the following multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G =1024*1024*1024, and so on for T, P, E, Z, Y.

It is better… 1 kB is 1000 bytes, 1 MB is 1000*1000 bytes. But what about K and M? They should be ‘KiB’ and ‘MiB’. Well, at least kB, MB and GB are consistent across storage tools (like df and du). Except for one thing…

This is from the man page of du:

 Units are K, M, G, T, P, E, Z, Y (powers of 1024) or KB, MB, … (powers of 1000).

The odd one out here is “KB”, because “K” does not exist in the SI as a multiplier. In the SI, K means kelvin, another basic unit.

So we are back to the initial question: what is a “KB”? A kilobyte is “kB” (lowercase k). And a kibibyte is “KiB” (uppercase K). “KB” technically doesn’t exist in either the base 10 prefixes or the base 2 prefixes. It’s there because people are too lazy to use upper and lower case properly. So how much is 1KB? Since it’s not officially defined, there is no answer. If it’s meant to look like “MB” and “GB”, it should be the equivalent of 1000 bytes. But it’s very often used as a replacement for “KiB”, so 1024 bytes.

Confused? Good! Maybe now you understand why we need to use a standard. Which is the following:

Base 10 prefixes are kilo (k), mega (M), giga (G), tera (T).

1 kilo = 1 k = 1000 units

1 mega = 1 M = 1 000 000 units

1 giga = 1 G = 1 000 000 000 units (let’s not start with what a ‘billion’ and ‘milliard’ is)

Base 2 prefixes are: kibi (Ki), mebi (Mi), gibi (Gi), tebi (Ti).

1 kibi = 1 Ki = 1024 units

1 mebi = 1 Mi = 1024 * 1024 units = 1 048 576 units

1 gibi = 1 Gi = 1024 * 1024 * 1024 units =  1 073 741 824 units

So, let’s recap:

1 kB is 1000 bytes. 1KiB is 1024 bytes or 1.024 kB.

1 MB is 1000 kB or 1 000 000 bytes.

1 MiB is 1024 KiB. It’s also 1 048 576 B.

1 kilobit (kb) is 1000 bits.  1 megabit (Mb) is 1000 * 1000 bits.

1 kibibit (Kib) is 1024 bits and 1 mebibit (Mib) is 1024 * 1024 bits.
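To show how unambiguous things become once you follow the standard, here is a small hypothetical parser of my own (the helper name parse_size is made up) that maps suffixed byte sizes to plain numbers:

```python
# Following the standard strictly: SI prefixes are powers of 1000,
# binary ("i") prefixes are powers of 1024.
PREFIXES = {
    "k": 1000, "M": 1000**2, "G": 1000**3, "T": 1000**4,
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4,
}

def parse_size(text):
    """Parse strings like '1kB', '2MiB' or '512B' into a number of bytes."""
    assert text.endswith("B")
    prefix = text[:-1].lstrip("0123456789")        # e.g. 'k', 'Mi' or ''
    number = int(text[:len(text) - 1 - len(prefix)])
    return number * PREFIXES.get(prefix, 1)        # bare 'B' means no prefix

print(parse_size("1kB"))   # 1000
print(parse_size("1KiB"))  # 1024
print(parse_size("8TB"))   # 8000000000000
```

An ambiguous “KB” deliberately has no entry in the table, which is exactly the point.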

So, for sanity reasons, please use the standards.


File magic

Let’s have some fun with files and filesystems. For practical purposes, I am going to use a simple file that I will treat as a pseudo-block device. So think of ‘vdisk’ as a generic partition on a disk. I am going to format it with an ext4 filesystem and mount it on a loopback mountpoint. The virtual disk will have a size of 1GB.


[root@ptah tmp]# dd if=/dev/zero of=/tmp/vdisk1 bs=1MB count=1000
1000+0 records in
1000+0 records out
1000000000 bytes (1.0 GB) copied, 0.772459 s, 1.3 GB/s

[root@ptah tmp]# mkfs.ext4 /tmp/vdisk1
mke2fs 1.42.9 (28-Dec-2013)
/tmp/vdisk1 is not a block special device.
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
61056 inodes, 244140 blocks
12207 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=251658240
8 block groups
32768 blocks per group, 32768 fragments per group
7632 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

[root@ptah tmp]# mkdir /tmp/vdisk.ext4
[root@ptah tmp]# mount -o loop /tmp/vdisk1 /tmp/vdisk.ext4

[root@ptah tmp]# cd /tmp/vdisk.ext4/
[root@ptah vdisk.ext4]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 923M 2.4M 857M 1% /tmp/vdisk.ext4

Now, I want to create a file of a certain size. truncate is a good command for doing that.

[root@ptah vdisk.ext4]# truncate -s 1T huge_file
[root@ptah vdisk.ext4]# ls -lh huge_file
-rw-r--r--. 1 root root 1.0T Mar 7 16:15 huge_file

At this point, you should notice that something is wrong: I just created a file of one terabyte on a filesystem that only has one gigabyte. Moreover, it seems that the filesystem is still far from full.

[root@ptah vdisk.ext4]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 923M 2.4M 857M 1% /tmp/vdisk.ext4

In fact, it seems that the file hasn’t used any space on the filesystem. To check the actual disk usage of the file, I use the du command:

[root@ptah vdisk.ext4]# du -h huge_file
0 huge_file

This confirms that the file isn’t using any space, despite reporting a size of 1 TB. How come?

To understand why this happened, we need to understand what a file is. A file is actually composed of an inode and zero or more data blocks. (Note that a file also needs a dentry (directory entry) to exist, but we don’t need to get into that now.) An inode is a structure that describes the file, holding things like the file owner, permissions, creation/modification/access times, the file size (which is what we care about now) and other things that depend on the specific filesystem. The name of the file is NOT contained in the inode… that’s why we need a dentry. A data block is a structure where the actual contents of the file are stored. The size of a block depends on how the filesystem was formatted (in this example, one block has 4096 bytes). An empty file has zero blocks, but as the file grows, more and more blocks are allocated.

So a file that has 0 bytes occupies 0 blocks. A file that has 42 bytes occupies one block, and so does a file that has 1024 or 4096 bytes. If the file has 4097 bytes, it will occupy two blocks (consuming 8192 bytes on disk) and so on. That means our 1 TB file (a TiB actually, since truncate’s ‘T’ suffix is binary) should occupy many blocks (268435456, to be exact). Only it looks like it doesn’t use any. stat confirms this:
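You can double-check that block count in the shell:

```shell
# bytes in 1 TiB divided by the 4096-byte block size of this filesystem
echo $(( 1099511627776 / 4096 ))   # 268435456
```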

[root@ptah vdisk.ext4]# stat huge_file
File: ‘huge_file’
Size: 1099511627776 Blocks: 0 IO Block: 4096 regular file
Device: 700h/1792d Inode: 12 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)

Why is that? It’s because the truncate command just set the file size value in the inode of the file; it did not actually allocate blocks for the data, because I didn’t write any data into it. So truncate only affects the inode, not the actual data blocks.

If I actually created a file and wrote data into it, I would find that I can’t really write that much data:

[root@ptah vdisk.ext4]# dd if=/dev/zero of=actual_huge_file bs=1MB count=1000000
dd: error writing ‘actual_huge_file’: No space left on device
949+0 records in
948+0 records out
948240384 bytes (948 MB) copied, 0.910452 s, 1.0 GB/s

Now, let’s try the same thing on a FAT filesystem.

[root@ptah vdisk.ext4]# dd if=/dev/zero of=/tmp/vdisk2 bs=1MB count=1000
1000+0 records in
1000+0 records out
1000000000 bytes (1.0 GB) copied, 0.923546 s, 1.1 GB/s
[root@ptah vdisk.ext4]# mkfs.vfat /tmp/vdisk2
mkfs.fat 3.0.20 (12 Jun 2013)
[root@ptah vdisk.ext4]# mkdir /tmp/vdisk.vfat/
[root@ptah vdisk.ext4]# mount /tmp/vdisk2 /tmp/vdisk.vfat/
[root@ptah vdisk.ext4]# df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/loop1                                     952M  4.0K  952M   1% /tmp/vdisk.vfat
[root@ptah vdisk.ext4]# cd /tmp/vdisk.vfat/
[root@ptah vdisk.vfat]# truncate -s 1T huge_file
truncate: failed to truncate ‘huge_file’ at 1099511627776 bytes: File too large

So the trick doesn’t work, for two reasons. First, FAT stores the file size in a 32-bit field, so a file can’t be larger than 4 GiB minus one byte; that is why truncate fails outright with “File too large”. Second, FAT doesn’t support sparse files at all: when you grow a file, the filesystem has to actually allocate a chain of clusters to cover that size. Unlike FAT, ext* filesystems do not need to allocate blocks until they are actually needed.

Some lessons to take away from this:

  • ls will not show the actual space used on disk, only the size recorded in the inode. This is why ls takes a very small amount of time to list the sizes of an entire directory: it only reads information from the inodes
  • du will actually calculate the space occupied on disk, by counting the blocks
  • stat will show you how many blocks a file has (how many data blocks are associated with an inode)
  • a file can have a smaller size (in the inode’s file size field) than the space it occupies on disk, because the block is the unit of allocation
  • the truncate command (along with the truncate system call) will only set/modify the file size field in the inode
  • depending on the filesystem and its implementation, when the file size is set, it may or may not actually allocate data blocks
  • dd will actually create data blocks, because it actually writes data inside the file
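The whole experiment condenses into a few commands you can run on any filesystem that supports sparse files (ext4, xfs, tmpfs…; the temp file path is just whatever mktemp picks):

```shell
# Create a sparse file: the size field is set, but no blocks are allocated
f=$(mktemp)
truncate -s 1M "$f"
stat -c 'size=%s blocks=%b' "$f"   # size=1048576 blocks=0
# Append one byte of real data: now at least one block gets allocated
printf 'x' >> "$f"
stat -c 'size=%s blocks=%b' "$f"
rm "$f"
```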


Thanks to my colleagues in the Storage and Filesystems team at Red Hat, who gave me the idea of writing this.

DevConf 2015 – Part 3: Executive summary

[See part 1 and part 2 for more in depth commentaries]

I wanted the conclusion to be in the form of a list of recommended presentations to watch online on the YouTube channel of the event. I also wanted to wait for the slides to be published but, at the time of writing, they have yet to be [UPDATE: some slides were published and I linked them below]. Here are the recommendations by topic. I filtered out the ones that sounded interesting but were not, and kept the ones I saw onsite or online that are worth watching.

My impression is that the theme of the conference was “Docker Docker Docker”, as it was at a lot of other conferences and events in the last year. So, start with the Docker presentations, then go to the projects about lightweight container hosts, like Project Atomic.

Fedora (and CentOS) Atomic [YouTube]

Docker Security [YouTube]

Probably the hot topic: Super Privileged Containers (SPC) [YouTube] [slides]

Then, take a look at the Kubernetes (the orchestration software for Docker) presentation and workshop:

What is kubernetes? [YouTube]
Kubernetes: launching your first application [YouTube]

After that, the PaaS framework on top of Kubernetes, OpenShift, so you get what’s the deal with the trio Docker-Kubernetes-OpenShift v3 and where atomic hosts fit into this.

OpenShift 3 – The future of _aaS [YouTube]

OpenShift v3: Docker, Kubernetes & the power of Cross Community Collaboration [YouTube]

A category of its own, but a fun presentation about tips and tricks for sysadmins:

Quick Hacks for DevOps [YouTube]

I would recommend the presentation about the new Fedora Server so you can form your own opinion, along with a presentation about Cockpit, one of the features that Fedora Server is trying to market.

Fedora Server – Getting back to our roots [YouTube] [slides]

Cockpit: Modern Server User Interface [YouTube]

And last, but not least, I would recommend the CephFS presentation, so you get a feel for another paradigm for filesystems.

Ceph FS development update [YouTube] [slides]


Though it wasn’t the most interesting conference I’ve been to, it was really worth going. There were lots of interesting presentations (actually, more than the number of slots). And I did learn about some new technologies and had fun at some presentations.
It’s pretty clear that Red Hat’s strategy (no inside information) is virtualization, more specifically, container virtualization. No surprises there. But what’s now visible are the technologies and products that will carry Red Hat in that market (successfully, I hope).
So, DevConf 2015 was fun and I look forward to going next year too.

DevConf 2015 – Part 2

[see part 1 first]

Day 2

I went early in the morning for a workshop about the guts of a modern NIC driver and bonding internals. It was very interesting (not for beginners, but also not that exclusive if you’ve had a minimum exposure to Linux device drivers and Linux networking). We got to compare some code for the veth and virtio drivers. There was also a discussion about the architecture of bonding and bridging in the Linux kernel. Nothing mind-blowing, but it wasn’t something I see every day.

After, I went to a presentation about virtualization on secondary architectures, meaning non-{x86, x64, ARM}. Unfortunately, it was less about virtualization and more about hardware, so I was out of the loop. And I regret not going to another presentation, about the sos report.

The SOS report presentation (which I viewed online later) was both on an interesting topic and delivered in a funny and geeky way by the presenter. sosreport, as I had discovered a couple of weeks earlier, is a project/tool that gathers diagnostic information about a system, which can be sent to a 3rd party in order to analyze and find the cause of a crash or malfunction.

An interesting presentation was one about Quick Hacks for DevOps. The title says it all: some tips and tricks for DevOps. I would say they are tricks for any sysadmin, but mostly useful for people who administer small to medium clusters. And there were some useful tips for any Linux user at the beginning. I am going to add to my shell the tip about coloring your prompt red if the previous process returned with an error (simple and useful).
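Here is one way that trick can be sketched in bash (the function name and the colors are my own choice, not necessarily what the presenters showed):

```shell
# Rebuild PS1 before every prompt; $? still holds the last exit status
# when the function starts, so we can pick the color based on it.
set_prompt() {
    if [ $? -eq 0 ]; then
        color='\[\e[32m\]'   # green: last command succeeded
    else
        color='\[\e[31m\]'   # red: last command failed
    fi
    PS1="${color}\u@\h:\w\$ \[\e[0m\]"
}
PROMPT_COMMAND=set_prompt   # bash runs this before printing each prompt
```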

Next, I was at a workshop about Kubernetes. I didn’t go to the presentation about it a day earlier, which would have given me the needed introduction, but the workshop was educational nonetheless. I had first heard about Google’s Kubernetes project a week earlier. It’s Google’s way of managing Docker. It builds on the idea of containers, organizing them into ‘pods’. These pods are a group of containers that work with each other to provide a service (for example, combining a web server pod with database pods to provide a web service). Kubernetes is a Go-based interface to manage these pods, which run on ‘minions’ (the hosts that actually run the containers).

After that, I went to a workshop about OpenShift, another technology that I had learned about a week earlier. OpenShift is an older project (currently at v2), but version 3 was rewritten on top of Kubernetes. While v2 is written in Ruby, v3 is rewritten in Go, under the name Origin.

At first, it seemed that OpenShift and Kubernetes do the same thing, but after a while (and some direct questions to the presenters), I got the idea: Docker is the container technology that does the image management, Kubernetes is the management layer over Docker that provides an infrastructure, and OpenShift is a framework for a Platform as a Service (like Google App Engine).

So, my personal executive summary: containers are the technology within the Linux kernel, Docker is a userspace framework to deploy containers on a host, Kubernetes is an orchestration platform for deploying Docker on a cluster of nodes, and OpenShift (v3) is the overlay on top of Kubernetes that provides a developer-centric interface for devs to deploy their applications on the infrastructure.

That being said, here is my rant about Docker: container technology has been out there for a while. OpenVZ was one of the first projects that caught on, but it required a custom Linux kernel. When things like cgroups made it into the mainline kernel, projects like LXC could create containers using normal Linux kernels. Docker didn’t do anything new regarding containers, but it did provide an awesome, git-like, image distribution mechanism. Also, from my point of view, what Docker really did was make containers ‘cool’ for the market. Probably because it had some corporate backing and made things more user friendly, it got further than the other projects. And this is why ‘Docker’ has been the buzzword at all conferences in the last year. So know what Docker is and what it is not!

Day 3

I started the last day with a presentation about perf. Perf is one of those things that does magic and it’s very nice to hear about. Only it doesn’t make a very good presentation topic; it’s better to see it in action, with hands-on examples.

The next presentation was about Linux Bridge, Open vSwitch and DPDK. I knew about DPDK from Ixia: it’s a userspace implementation of network drivers, optimizing packet processing by taking the kernel out of the data path. The point of the presentation was showing performance results for Open vSwitch and DPDK setups compared to the normal Linux Bridge and the normal (in-kernel) Open vSwitch.

After that, I caught half of a presentation about Fedora Server. I am not a Fedora user and I was surprised that they started delivering Fedora in Server and ‘Cloud’ versions. Apparently I am not the only one surprised, because a lot of people share this view. In the Red Hat world, you have RHEL Server, the enterprise version; CentOS, the community-backed version of RHEL; and now there is Fedora Server, which nobody really knows what it’s going to be. In a server environment you want stability and support: both RHEL and CentOS are supported for many years, while Fedora is supported for about a year. So the idea of Fedora as a cutting edge but unstable server distribution is strange. They want to keep it as the beta grounds for what RHEL and then CentOS will become in a few years.

After two and a half days of ‘containers’, the presentation most awaited by attendees was the one about super privileged containers. After all the talks about how containers and atomic hosts are good for security and ease of deployment, people had to focus on the downsides. Like the fact that you may want to install something in a container but you can’t, because you don’t have access to the host. Enter super privileged containers, which are more than containers but less than hosts. The presentation explained the concept and the current, rather unstable, implementation.

My last presentations were storage related. I went to a presentation about Ceph. This is a storage technology that I learned about while attending LinuxCon Europe 2013 (actually, the presenter was someone I had talked to at LinuxCon). The company behind Ceph was acquired by Red Hat and now they are trying to integrate it into Red Hat Storage solutions. They gave an update about their work on CephFS, a POSIX-compatible filesystem that works on top of object storage. The architecture is interesting and could prove important for large clusters (aka ‘big data’).

And the last presentation was about lvm2 and the new features available for Logical Volume Management. The important news was about cache management for logical volumes, along with some features inspired by mdadm.

To keep this article short, I will leave the conclusions for part 3. I will link videos and slides to the presentations worth watching.

DevConf 2015 – Part 1: Introduction

This weekend I attended DevConf 2015. It was probably the first time I went to a conference that I didn’t plan on joining (or didn’t even know about) a month earlier. But since I was in town and had a free weekend, I went, and I can’t say that I regret it: there were some nice presentations and workshops, and I left having actually found out about some new things.



DevConf is a Red Hat-organized event for people involved in projects from the Red Hat ecosystem. This includes Fedora, CentOS, OpenShift, JBoss and others. It is organized in the European city where Red Hat has its biggest development office: Brno, Czech Republic. Registration was not needed, so attending was free.

This year’s edition lasted 3 days, from Friday to Sunday (today).

The venue was the Faculty of Information Technology at Brno University of Technology which, as I found out, was a great place to have a conference. The university is very modern and had everything required for hosting an A-class conference. I would also like to point out that the faculty building is actually an old monastery, and the architecture is very interesting. Even though from outside the walls the massive building seems hundreds of years old, the inside is as modern as you could imagine a 21st century university to be.

See photo album.

As a short introduction, I should say that the conference was organized into several tracks, including kernel, networking, storage, virtualization, containers (I’ll get back to this topic in part 2), software quality, security, middleware and many others. Presentations were about 40 minutes long and videos are available on the YouTube channel of the conference. The workshops were about 90 minutes long and some, unfortunately, are not available online because they were hands-on. There were also some lightning talks (which I didn’t attend). You can see the schedule at

Day 1

Because it was a Friday, I didn’t attend many of the presentations. I did watch the opening keynote about The Future of Red Hat live online (I love YouTube/Hangouts for the On Air feature).

Also, I did attend a lunch presentation about unikernels, the reason being that I saw OSv in the description. OSv is something I had heard of in the past, when I tried to search for modern growing kernel projects that were not Linux or BSD. I learned more about OSv and also about MirageOS (written in OCaml), including some strengths compared to complex kernels like Linux. The two have the potential of becoming something interesting in the future, but they are currently so early in development that they are nowhere near ready for production. I think that for now they are only useful for research in the field.

A lot of the other presentations of the day were focused on cloud and containers (that would be Docker; more on this in part 2). I would have liked to go to the presentations about Docker security and Docker deployment. There was a presentation about Foreman, something I had recently heard about and would have liked to learn more about. And it would have been useful to go to the Kubernetes and Fedora Atomic presentations, because the workshops the next day were about them. I will be discussing these projects later, when I talk about days 2 and 3.

In the evening, there was a party with food and drinks for the lucky
attendees that had tickets.

To be continued in part 2. Executive summary (and conclusion) in part 3.

Tribute to my Chromebook

Today, something that doesn’t usually happen to me, happened: I broke a piece of electronics. More specifically, my Chromebook fell from a high surface and the screen was damaged beyond repair. So I thought that, as I say farewell to this great piece of hardware, I would publish the reasons why I think it is awesome.
I think the Chromebook is the best thing that happened to laptops since… laptops. Of course, it’s a subjective view, but I have my arguments. These devices were a big step forward because they caused a shift in the way we see a laptop.
As you might know, the Chromebook is Google’s take on a laptop. It’s defined as a laptop made for running ChromeOS, an operating system developed by Google on top of the open source ChromiumOS project. This is an operating system based on Linux (so it’s a Linux distribution) that basically runs one application: the Chrome (Chromium) web browser. So all you can do with it is browse the web.

Why would you need such a thing? Why would you pay money for such a thing when you can have a normal Windows/MacOS/Linux laptop that runs anything AND a web browser? Well, first of all, because it’s cheaper than a normal laptop. The hardware is pretty basic and, since there are no operating system licence fees attached, these are the cheapest laptops out there.

Yes, but you can still ONLY browse the Internet. Think about it. How much of your time on a computer is spent doing just that? Reading your mail on GMail or Yahoo, interacting with people on Facebook, Twitter, LinkedIn, G+ or Reddit, watching a video on YouTube or Vimeo, learning about new things on Wikipedia or reading blogs and other news sites. Google, as well as some others (like Mozilla), noticed how people are shifting from local applications to web based applications. And the Chromebook is a device made specifically for those who spend 90% of their time within a browser.

If that isn’t enough, you have Chrome addons. I am not a big fan of browser addons, but tons are available. For example, you have an SSH client, an IRC client or even a console written for Native Client (NaCl).

Now, I am not saying that this is something for everyone. Far from it. First of all, gamers: this would be useless to them as a main device. But they usually have a high-end desktop for that, and as a secondary, mobile device, a tablet or a Chromebook is all they need. Chromebooks are also not useful as workstations for employees that need specific tools in their OS (like programmers), though some web programmers might find them even better than normal laptops. Also, Google, which has one of the world’s largest programmer armies, uses Chromebooks internally, so maybe even programmers can find the perks in one. And there is another group that might not appreciate them: people that don’t like Google (because this is as much a “Google product” as it gets).

But I think that there is a huge market for these devices: non-technical people that need something simple, that just works. Think of how many times a friend or a relative of yours that wasn’t good with computers asked you to reinstall Windows on their computer, to get rid of a virus they had or just to fix something in their operating system. For more than 20 years, computers have been moving out of the hands of IT professionals into the hands of normal users that don’t fully understand how computers and operating systems work. And what they need is a device that can do what they need it to do and is as simple as possible to work with. Apple gained a large market promoting the “it just works” feature compared to Microsoft’s buggy Windows. Google went a step further, making an OS that is really REALLY hard to break.

Did I mention the extra fast bootup time and the very large battery lifetime?

Moving on with the laptop paradigm change, another feature that Chromebooks have is inherited from Android. The reason I like Android so much is that I am not tied to the physical device. I have a Google account to which everything I need is attached and synchronized. If my phone breaks or gets stolen (as happened with my previous phone), I buy another, log in with my account and I get everything back as if I had had that phone forever. This is a feature that wasn’t imagined 10 years ago but is now pretty standard, and I think it’s amazing.

And if this is not enough, here is what I think is the best feature and the best potential user market: the security needed for enterprise. If you start thinking from the IT department’s point of view, this is a gold mine. And many companies, not just IT companies, have IT departments. With Chromebooks, they don’t need to do hardware benchmarks for employee equipment anymore, they don’t have to prepare a special OS image for each employee, they don’t need to keep it up to date and virus free, they don’t need to track and manage software licences, and they don’t need to worry if the laptop gets stolen. It’s the IT guy’s dream for the company.

And security wise, ChromeOS builds on top of Chrome’s security thanks to the easy update system. No Windows XP end-of-life problems with a Chromebook. And because the system is so simple (relatively speaking) and you don’t have access (or need, for that matter) to admin privileges, a lot of system security holes are closed from the start.

I must confess that I might not be in the main category of Chromebook users (because I do many non-standard things on a computer). But I would buy my parents one. I received mine from Google because I was a Google Student Ambassador in EMEA. And what kind of Google representative would I have been without a Chromebook? I was the proud owner of a Samsung Series 5 (ironically codenamed “Alex”). This was the first publicly available Chromebook after the CR-48 prototype. I got my Chromebook in September 2011 (so I think I was one of the first owners in Europe) and it still worked perfectly, receiving updates 1-2 times a week, until this morning.

Good bye, you have served me well!


Online education: Duolingo

Have too much time on your hands and want to learn a new language (and I don’t mean C++, Python or Ruby) ? Or do you want to catch up on your forgotten French or German learned in high school? Duolingo might be what you need.

About two years ago, some friends of mine who had moved to German-speaking cities told me about a language learning site called Duolingo. I created an account, saw that you could learn German, French, Spanish and others, but didn’t really have time to actually do anything. Some time later, when I had some free time and wanted to refresh my foreign (non-English) language skills, I quickly searched for some sites that could teach me, but they weren’t that exciting because they lacked interactivity. They were basically class notes posted online. A few offered videos so you could hear the pronunciation, and some had quizzes for self-evaluation. But then I rediscovered Duolingo and my search was over.

Duolingo is a little different from other language learning sites, and that is probably why it is successful. First of all, it’s very interactive. After you choose a language that you know (let’s say English) and a language that you want to learn (let’s say Spanish), you directly start learning by doing. No tons of theory before being asked to do a simple exercise; you start with the exercises from moment 0. You always get instant feedback, and it’s amazing how intuitive things are and how many things you can learn instantly.

There is a range of different exercises, balanced out and organized in a knowledge tree. You either read and/or hear a text in Spanish (using it as an example) and have to translate it into English, or you get an English text and need to translate it into Spanish (at first you get easy texts whose translations you can successfully guess). You also get single word (usually object) association tasks, where you either get an image and need to type in the word describing it in Spanish, or the other way around. And probably the most interesting task is speech recognition, where you get the chance to pronounce the learnt words (it has a rather high number of false positives, but it’s still useful to focus on speaking too).

Another interesting thing about the architecture of Duolingo is that it has a smart knowledge tracking system. The chapters don’t just have static lessons and tasks. Each exercise is generated considering what words you know and which ones you need to exercise more. The system keeps track of the words learned and when you last used them, and if it has been too long since that happened, it makes you repeat them. For example, basic words in the first chapter will probably be used in most of the other chapters, so you are “strong” with those. But some others, less likely to appear in generic exercises, will become “weak” and Duolingo will suggest you do more exercises with them.

You learn something by repeating it, and Duolingo helps you repeat what you need. But repetition is sometimes boring, so the site makes everything like a game. And it doesn’t matter if you are young or old, games are always attractive.

They add, in a game-like manner, several incentives to keep you going: points that you earn by doing each lesson, levels when you reach certain point values, and bonuses if you study every day. And if you add friends who also use the site, you can compete with them on who earns more points (who studied more), making things more social. Combined with a very nice interface, both in your browser and in a mobile app, it makes the interaction very appealing and it can become really addictive (in a good way).

Another reason why I think Duolingo is so powerful is the fact that it’s community driven. It’s not developed just by one company or a group of people, but by an entire global community. All the language courses are distributed across volunteer groups all around the world. For example, the course “English for Romanian speakers” is done by a group of Romanians who… speak English.

Also, it can be developed by you, in an open source-ish way. Some exercises still have bugs and you can report them and/or suggest correct solutions while inside the lesson. And if you are unsure if something is correct or not, you can have a discussion on the many forums available. The forums are formed by people like you (some beginners, some advanced) and they are very interactive and busy.

And what I am glad to see is that the site is constantly evolving. They have a program called Incubator, which is a beta program for new courses. And there are a ton of new courses being developed. When I first signed up, there were 6 or 7 public courses. Currently, there are 21 that have graduated from beta and are officially launched, with more than 30 others in phase 1 or phase 2 of development. And yes, you can beta test courses that are not officially released.

Of course, everything takes time and effort. Don’t expect to learn overnight. Just to give you an example: never having taken a Spanish course in my life, I started the Duolingo “Spanish for English speakers” course (I am a native Romanian speaker, but proficient in English) about a year ago. It took me about 9-10 months to finish all the exercises available (redoing some lessons in the process). My mother took the “English for Romanian speakers” course (which was still in beta when she started) and it took her about 6 months to finish all the lessons. And though neither of us can have a decent conversation in the newly learned language, we can now, at least, understand what others say or write in that language.

That being said, I would like to leave you with the following TED talk. It’s a very interesting presentation that shows you how important the first steps in learning something new are.

(note: this article was started on January 1st 2014… it took about one year to publish. It has been a busy year)

VPS Security

I recently decided to migrate this site from an older server to a VPS. I went with IntoVPS because I got some positive feedback from people I know who use their services. Although I am a big user of Ubuntu Server (LTS), I went with a CentOS install. My first Linux interactions were with RedHat-based distributions, but for the last 8 years I have been almost exclusively using Debian-based distributions. So I decided to remember how the Red Hat world looks and change scenery.

I got the server last week but haven’t had time to do anything with it. I turned it on, ran a ‘uname -a’ to see what I was dealing with and left it alone. When I came back and logged into ssh, the login banner said I had about 20 000 failed ssh login attempts in less than a week. Time to add some security. And, since I am doing that, I thought I should also document some security practices that I think are important for a system.

To start off, some simple user and password tips. The install came with a root account and a randomly generated password. Never ever ever EVER use the root account unless you really REALLY have to. So I added my own user and gave it administrative access via sudo. Of course, it’s important to have a good password, and my opinion is that length matters more than complexity; I try to have both a decent length and different character types. But having a user with sudo access is useless if the root password is still there. So, the next step is to remove the root password.

Be very careful not to lock yourself out when doing this. Verify that you have another way to get access to the root account before removing the password (sudo su access or ssh login via private key). Also, remember that a blank password is not the same as no password. What you want is the latter and you can use:

passwd -d root

Next: SSH. Since this is a VPS, SSH is a vital service (though I am a fan of WebUIs, like gmail, I wouldn’t go for a web interface to administer a server… CLI is all I need). A good practice is to remove the possibility to login as root over ssh. You do that by editing /etc/ssh/sshd_config and adding to it:

PermitRootLogin no

There are some other things you can change in the config, like the timeouts and max failed attempts. Don’t forget to restart the daemon (the service is named sshd on Red Hat based systems, ssh on Debian based ones):

service sshd restart

Another idea would be to change the (TCP) port that sshd listens on. Why bother? Well, even though you configured a good password against brute force, these attacks might still cause you harm. Even if someone tries a password and fails, they still managed to connect to the SSH process: it opens a TCP connection and consumes CPU and memory to verify the credentials.

So what you can do is stop an attack lower in the stack, at the transport layer. You can change from the default port of 22 to another port (for example 8022). Since most brute-force bots will only try port 22, they will fail because that port is not open on your server. Bad news for the attacker, good news for your server.

Just remember that when you are configuring the ssh daemon, you are probably ON the ssh connection. Be sure you don’t close the connection without having a way of getting back in. And make sure that you understand what changing the port means. 22 is the default port, so you don’t have to specify it when connecting via ssh. But when you change it, whenever you connect via ssh or scp, you need to manually type in the port (and DON’T FORGET what you set the value to). Also remember that some firewalls may block unusual ports, so you might be on a network that only allows standard ports like 80, 443, 21, 22, 25, 53. Be sure that when you make a decision, you make an informed decision.
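As a sketch of what that looks like (the port number here is just an example, pick your own), the relevant sshd_config lines and the client-side consequence would be:

```shell
# /etc/ssh/sshd_config -- example values
Port 8022
PermitRootLogin no

# after restarting the daemon, clients must name the port explicitly:
#   ssh -p 8022 user@yourserver
#   scp -P 8022 file user@yourserver:/path/
# note that scp uses a capital -P for the same option
```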

Still paranoid about attacks? Good! Another good protection is to install a tool like fail2ban. It detects brute-force attacks and blocks them in your firewall. It takes information from logs, detects malicious activity and adds rules to iptables and hosts.deny. Just install it and watch it do its job.
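As an illustrative sketch (the values are examples, not recommendations, and the jail name varies between distributions), a minimal fail2ban override for ssh could look like:

```shell
# /etc/fail2ban/jail.local -- hypothetical sketch
[ssh]
enabled  = true
maxretry = 5      # failed attempts before a ban
bantime  = 3600   # seconds an offending IP stays blocked
```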

Now for the extra paranoid section: service versions. Some attacks are based on vulnerabilities found in specific software versions. Keep your software packages up to date, because older packages are more likely to have known, unpatched vulnerabilities. But you can also reduce risks by hiding public information like the version number.

In Apache, for example, the server sends version information by default. You can edit the configuration file (/etc/apache2/apache2.conf or /etc/httpd/conf/httpd.conf) and add (or set) the following directives:

ServerTokens ProductOnly

ServerSignature Off

And remember that some services need to be accessible from outside (like httpd) but some don’t. For example, if you need a MySQL server that is only used by local services (like Apache), don’t make it accessible to the outside. Bind the process only to loopback (127.0.0.1 and/or ::1) and not to the IPs of uplink interfaces. Use netstat to check this:

netstat -ntlup
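For example, assuming a MySQL server that only the local Apache needs, binding it to loopback is one line in its config (a sketch; the file path varies by distribution):

```shell
# /etc/my.cnf (or /etc/mysql/my.cnf) -- listen only on the IPv4 loopback
[mysqld]
bind-address = 127.0.0.1
```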


Hurricane Electric and Dynamic DNS

I have been using several (free) DNS providers, including the ones I bought domains from, as well as specialized providers. Lately I have been using Hurricane Electric as my free DNS provider and I found it to be the nicest so far.
HE is known for its Tunnel Broker service, which provides users with IPv6 connectivity via IPv6-over-IPv4 tunnels to their servers. The fact that they were IPv6 friendly made me test their DNS hosting services. As I found out, they are (were) one of the few that have IPv6 enabled servers and offer DNS hosting for free.

They have a very friendly user interface and reliable services. Yesterday I tried the dynamic DNS feature, since other dyndns sites I used either closed, became non-free or weren’t as reliable. Dynamic DNS is used on hosts that don’t have a static IP (for example, when using DHCP) but need a (sub)domain pointing to them. Configuration is easy and it usually requires a client that periodically tells the DNS server the new value of the host’s IP. A simple guide can be found here.

Since I wasn’t satisfied with the dyndns clients available, I wanted to do it with my own client. All I needed was the HE API, which is a simple web page that takes parameters from the URL, and some basic bash knowledge.

My particular issue was that the server that needed the dynamic host is behind a router with NAT. It has both IPv4 and IPv6. The IPv4 address is a private one (behind NAT), so the address for the A entry is the public IP of the router. The IPv6 address on the server is publicly routed, so the AAAA entry needs the address on the server interface. The best way is to get the addresses that the Internet sees, for example by using sites that simply echo back the address your request came from.

So the simple script for a dyndns client became:


#!/bin/bash
# Get the public IPv4 and IPv6 addresses as seen from the Internet
# (icanhazip.com simply echoes back the address the request came from)
IPv4=$(wget -qO- http://ipv4.icanhazip.com)
IPv6=$(wget -qO- http://ipv6.icanhazip.com)
# Push both values to HE's dynamic DNS API
# (the hostname and password below are placeholders)
wget --no-check-certificate -qO- "https://dyn.dns.he.net/nic/update?hostname=host.example.com&password=SECRET&myip=$IPv4"
wget --no-check-certificate -qO- "https://dyn.dns.he.net/nic/update?hostname=host.example.com&password=SECRET&myip=$IPv6"

Just add the script to a cron job and you’re set.
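A hypothetical crontab entry (the script path is an example) that runs the client every 10 minutes would be:

```shell
# refresh the dynamic DNS records every 10 minutes, discarding output
*/10 * * * * /home/user/bin/dyndns.sh >/dev/null 2>&1
```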

Linked lists

If you are a computer science student or have ever taken a data structures course, you must have heard about the concept of linked lists. The structure is a way of storing a set of data, similar to an array. Unlike an array, you can’t access the items by index, but on the other hand, you don’t need a fixed size for the data set, since elements can be added or deleted at any time. Operations on this data structure usually take more time, and each element needs more memory than one in an array, but you can have a great number of elements without needing a contiguous memory space like you would with an array. For the rest of the article, I’m going to focus on the C programming language.

Since I was first taught about linked lists, in high school, the concept has been the same. You define a structure with some fields for the data that you want to store in each element and an extra field (or more) that is a pointer to an instance of this type of structure. You can have a simple linked list where each element keeps a reference (pointer) to the next element in the list (you know the list ends when an element has NULL as next), or you can have a doubly linked list where you have both a next reference and a previous one. In the first case you can only go through the list in one direction, while in the second case, you can go back and forward as you wish. Here is an example:

struct my_linked_list {
	int id;
	char name[MAX_LEN];
	struct my_linked_list *next;
};

struct my_double_linked_list {
	float x;
	float y;
	struct my_double_linked_list *next;
	struct my_double_linked_list *prev;
};
For any actions on the list, you need a pointer to one element of the list (usually the first element). You can treat the list as other abstract data structures: like a stack (LIFO), where you only push and pop the first element (the top); like a queue (FIFO), where you add after the last element and take out the first; or you can just consider it a generic list and add/remove elements even in the middle. More information about lists in C here and here.

What is important is that for each type of list (list of integers, list of strings, list of a custom structure) you need to write functions specific to that list. So if you write functions for adding, removing and searching elements of an int list, you can’t use the same functions for a list of floats. You need to declare a new structure for a list element and functions to work with it. Keep in mind this is not C++ or Java, where you can have a generic Object class.

When I took my kernel programming class, I found out about the implementation of linked lists in the Linux kernel. At first, the concept was strange to me, because this implementation offered an API to a defined structure called struct list_head that worked differently than what I knew about lists. Instead of defining a list structure with my data as an element, I had to just add this list_head element as a field. So rather than having a specific list element structure that contained a generic type like int, char[] or another custom structure, I could use a generic structure (that contained generic fields) with this generic list element type. And the API provided generic functions to add, remove and iterate given a list head. An example would look something like this:

struct my_structure {
	int value;
	struct list_head mylist;
};


At first look, it might not seem like much of a difference, until you realize that the list element is not tied to the type of the structure. This means that you could have a list of elements of different types (and now we can tell C++/Java fans to quit bashing C for not being flexible). Yes, you still have to say in code that an instance of that structure will be in a list, but in the classic implementation you needed to describe the structure anyway (if it wasn’t a simple type like an int or float) and then describe a new structure that was a list element with a field of the first structure. And now we see how generic this implementation is.

Once you get past the initial shock of things being upside down, you realize how powerful this implementation is. For example, an element can be part of more than one list at the same time (just add a list_head field for each one). And you don’t need to reinvent the wheel implementing each list and its operations.

These lists are used throughout the kernel. For example, task_struct, which basically defines a process, has lots of lists associated with it. This is a critical structure in the kernel, but if someone needs to add extra functionality that involves a linked list, they don’t have to worry about breaking other lists by changing the list implementation. They would just add a new field to task_struct and make the same task_struct instance part of the new list, keeping everything consistent.

Recently I was reading a book with interview questions and there was a section about linked lists. The classical theory was presented and the answers were implementations using that classical model. My own answers used this Linux kernel style implementation, and I noticed that it made the problems much easier to solve. So now I keep asking myself why this implementation isn’t more publicised. I know it’s used in non-kernel projects (the implementation is very easy to port; everything is in the list.h file).

To find out more, check out the tutorial on the wiki and also take a look on a similar article about linked lists.

Online Education: YouTube Education (again)

I imagined I was going to end this series a while back, but it seems that I keep finding related topics. I already discussed this particular topic before and I am just going to make some amendments.

Recently, I keep running into some amazing science channels on YouTube. And what’s interesting is that these channels form a very tight network. The people that run them always refer to one another (in their videos) on some topics. Google seems to be the one behind this growing network, sponsoring the independent creators.

Take CrashCourse, for example. I really enjoyed the World History and US History series and partially followed the Chemistry and Biology ones. The channel is run by John and Hank Green, the Vlog Brothers. I also started watching the vlogbrothers channel, their initial project. It’s interesting to watch because they talk about a lot of different topics and it’s better than a TV news show.

I knew about Hank’s SciShow channel, but recently discovered something made by John: Mental Floss, a list show with interesting facts. Through mentions in their videos, these channels linked me to some others. For example, CGP Grey, which I already knew about, but also to new ones. I can’t remember the path to discovering each one, so I’ll just list them.

Veritasium is a physics channel by an Australian named Derek. It’s full of experiments and explanations. An interesting thing that Derek does is go outside and discuss physics with ordinary people: showing them day-to-day experiments, asking them what they think will happen, then letting them see what actually happens, and explaining why things do what they do.

Another physics channel is MinutePhysics. This show has lessons on simple things like how a mirror works and what tides are, up to complicated things like the theory of relativity or what dark matter is. The videos are rather short, so people that have a hard time focusing for longer periods can learn things quickly. A related channel by the same team, focused on our planet, was spun off: MinuteEarth.

Vsauce is also a science/physics channel, run by a guy named Michael. The videos here add a spark of philosophy to science and very often try to answer questions that can’t be answered or just don’t have an answer that we know of. There are some related channels, Vsauce2 and Vsauce3, focused on technology and games.

And for those that like math more, alongside ViHart, there is Numberphile. This channel explains numbers and their meaning, mathematical concepts and important mathematical formulas and theorems. There is also a related channel called Computerphile, which explains things related to computers, IT and the Internet.

All these channels, sooner or later, point to one another, and you can see people from one channel guest starring in another. I find this very important because this networking helps bring new interesting content to YouTube. And I am glad that Google and YouTube promote these educational programs and help the Internet create useful content and not just cat videos.

Here’s a video that made this YouTube network obvious to me (also, interesting concepts to watch)

Some of these channels also have something else in common: they are involved in a project called Subbable. Started by Hank and John, the Vlog Brothers, Subbable is a framework for funding content makers. The channels I talked about are not just 10-minute videos done in 10 minutes. They are videos that need hours and hours invested by several people, and those people need a way to make a living while creating them. Some have daytime jobs and do the videos as hobby projects, but most dedicate the majority of their time to this and rely on donations from viewers. Subbable is a donation based system, where viewers of a channel can donate 0 or more dollars to their favorite projects. The content of the channels is, as they say, “free forever”, but donations are needed to keep that up. Nobody is forced to pay, but donations are welcome from anyone that can.

Here is Hank’s introducing Subbable:

I don’t want to end this post without mentioning one more recently discovered channel. Though not referenced by the other ones, it appeared in a YouTube ad (one that I didn’t mind seeing). I think you all know about TED and all its interesting video presentations. TED recently introduced a program called TED-Ed. Its mission is to bring together educators (people that have knowledge worth sharing) and video animators to help create educational videos that the whole world can see and learn from. This too is in line with the missions of projects like Coursera, Khan Academy and many others.

As you can see, the Internet is serving its purpose: providing information that is easily distributed to the entire world. But the Internet always needs humans behind it to create the content. The world needs people like these to help spread knowledge and make the human race just a little smarter.

LinuxCon Europe 2013 – part 3

The third day of LinuxCon Europe was, probably, the most awaited because of the star guest: Linus Torvalds.

The father of Linux sat down for the morning keynote, answering questions from the moderator and the public (video online). The first question was about what makes a good Linux kernel maintainer. His answer was nice and mature, not even touching the technical requirements but stressing that it’s about being responsible. The Linux kernel needs people that are not just involved for their own purposes, but people that have the trust of the community. It needs responsible people that make the best decisions for the project and for the community.

At least in his opinion, Linux has been complete as an operating system for years now, because it has everything he needs (I think he wanted to point out that Linux has been mature for a long time). But new features are constantly coming to the kernel, and that is both good and amazing. Of course, there is always new hardware that needs to be supported.

One interesting question from the audience was about how to get (educate) hardware vendors to do open source. I think Linus answered it well by saying that people shouldn’t go around making people use open source. People should use open source because it’s fun and it works, and it’s almost pointless to put effort into convincing people of that. The best way is to let them figure it out for themselves, and if they find out it’s also good for them, they should use it and we should help them out with that.

Of course, there was the question about why Linux isn’t doing that great on the desktop. Linus pointed out that there is a lot of rivalry amongst desktop projects, but the fight is for the wrong reasons. Rather than just making login screens nicer, they should focus on what users actually need: a reliable desktop experience. The subject of Steam and its Linux-based OS came up and Linus was optimistic about it. He hopes it will do good for the Linux desktop market. Linus was actually optimistic about most things, including getting new developers from places like Asia or bringing more women into the dev community.

An interesting question was about what would happen if Linus were to retire, and when that could be. This is a similar question to “what would happen if Linus gets hit by a bus”. It looks like the Linux Foundation took steps in that direction, and Jim Zemlin said they took out a life insurance policy on Linus. But Linus gave a similar answer: the community would go on even without him. Although he did say that he is still very much useful to the Linux project (less for the technical part and more for his image, as he sadly admitted), if he were to retire, there are a lot of developers that could take over his responsibilities. But he didn’t give any names. As for the moment when he would retire by himself, he said it’s either when he won’t be able to do his tasks or when he won’t find it interesting any more. For now, neither of those has happened, so things go on as normal.

The next and final keynote was “Living in a Surveillance State” by Mikko Hypponen. Although the video was not posted online by the Linux Foundation, you can find the same presentation at a TED event.

This day was also personally important, because it was the day I had my own presentation at LinuxCon. I gave a talk about how our education system could benefit from the values of open source: how small communities can spark new ideas and projects that could help spread knowledge, and how open education can be more efficient in the modern world.

The fourth day was dedicated to the Gluster community. The following days also hosted events like the Linux Automotive and Embedded Linux conferences, but I didn’t attend those personally.

And that wrapped up LinuxCon Europe 2013. It was a very interesting experience and it was nice to be in a place surrounded by technical people that see the value of open source. It was not a gathering of idealists or outcasts meeting to talk about how the rest of the world is wrong. It was a gathering of people that see potential in the idea of Linux and open source and want to collaborate with similar people to bring new and good things to the world.

There were some things that I expected but didn’t happen. One was to see more Android and ChromeOS things, so I was kind of disappointed by Google’s presence (or lack thereof) at the conference. But more surprising was the absence of Valve and SteamOS. At LinuxCon North America, Valve announced that they were betting big on Linux, and I was expecting LinuxCon Europe to be the place where they would announce SteamOS (still waiting for that). Apparently Valve is just focusing on the US for now.

Overall, I really enjoyed the atmosphere there and I hope to attend the conference in the coming years too. LinuxCon Europe 2014 is rumoured to take place in Germany (I was hoping for Vienna). Hope I’ll be able to go. Until then, I’m proud to be a member of the Linux Foundation and I will be enjoying my email.

That’s a wrap.

LinuxCon Europe 2013 – part 2

[See part 1]

The second day of LinuxCon started out with some keynotes on cloud platforms, but since the subject didn’t appeal to me, I spent the morning at the KVM Forum, where I got the “weather report” for the community. KVM looks like a very strong community with lots of achievements and many plans for the future. Though I found out I was a total outsider to KVM, I did learn about the “Big KVM Lock”, which is a serious issue for the scalability of the system.

The rest of the morning, I visited the stands back at LinuxCon. Most of the time was spent at Intel’s Tizen booth. I knew about Tizen being a Linux Foundation backed project and an alternative to Android, but that’s about it. I asked the Tizen guys _a lot_ of questions. I found out that the architecture is very similar to Android’s, but it relies more on features already available in Linux (such as SELinux) rather than reimplementing new things (like Android does with the Dalvik virtual machine). I learned that they have their own application development environment (also Eclipse based), but rather than make their developers write Java, they make them write HTML5. This made me wonder if apps made for Tizen could be easily ported to FirefoxOS and vice versa (we shall see). They also have a native app environment where you can write C based applications. And I was curious how an HTML5 app would communicate with a native app. The answer was: WebSockets. Tizen is to be released next year, and Samsung, Intel and other companies have invested a lot in this project.

Seeing the Tizen presence at the conference made me think about something: there was absolutely no Android/Google at LinuxCon. Although Google’s Android (and to some extent, ChromeOS) was praised during some presentations for bringing Linux to the everyday user and making it so embedded in the world, I was surprised that neither Google nor anyone in the Android community had sent representatives to talk about the project. Is it maybe because “Android has won”, so there is no need to invest in talking about it anymore? That seems a mistake.

One of the highlights of the day was the Kernel Developers Panel (video available online). Hosted by LWN’s Jon Corbet, its main guest was kernel maintainer Greg Kroah-Hartman. Greg talked about how hard it is to maintain a kernel subsystem, because it’s difficult to find and keep committed maintainers. He also pointed out that, apart from tweaking some features, the core of the Linux kernel is pretty much stable and almost all development is being done on device drivers. And the need for them is always present.

After the Developers Panel, I went to a presentation hosted by the same Jon Corbet from Linux Weekly News: The Kernel Report. It was probably the most interesting presentation of the conference. He started with some statistics and observations on those statistics. First of all, the time between releases has shrunk (but he noted that there is a limit on how small a release cycle can be, and we’re getting there) and the number of contributors is on the rise. But the number of independent/unaffiliated/hobbyist contributors has gone down and the number of company backed contributors has gone up. Still, the need for kernel developers remains high. These and other statistics will probably be available online in the yearly report. One interesting statistic he made about the origin of the patches sent to the Linux kernel was based on the timestamps of the emails (because the domain name of the email doesn’t help a lot). The proportion was: Europe (I’m assuming it also includes Africa, though that wasn’t mentioned in the presentation) ~40%, the Americas ~30%, Asia ~20%.

Looking to the future, the Kernel Report listed some interesting technologies that should be focused on in the next years. It included things from mobile to data centers, from data storage to networking and security. Among the things mentioned was, for example, the need to think about using huge pages by default in the system’s memory management. Seeing how memory resource needs have grown, the page size introduced 20 years ago might not scale anymore. He pointed to this trend in OSv, an operating system built for the cloud (virtualization) that only uses huge pages. The system scheduler is another component that needs tweaking in order to provide better power management, and he mentioned the idea of the tickless kernel. Improving filesystems and SSD support is a very important topic, mainly in the data center world. Multipath TCP is something needed on the Internet, but the Linux kernel is falling behind in implementing it (especially since Apple announced that it was implemented in iOS). The new nftables was also mentioned: a new firewall system that is to merge the four current tools (iptables, ip6tables, arptables and ebtables) into a common system.

At the end of the day, googlers from the Google Open Source Program gathered participants in a room for a discussion of the Google Summer of Code program. They wanted to get opinions from past mentors and students and gather feedback on how to get open source communities more involved in this project by offering more mentors.

To be continued.

LinuxCon Europe 2013 – part 1

A couple of weeks ago, I was in Edinburgh, Scotland (UK) for the 2013 European edition of LinuxCon. It was my first time at LinuxCon and it was the biggest conference I have attended (it’s bigger and nicer than FOSDEM). And I got to visit a very nice city: Edinburgh. If you ever get the chance to visit Scotland, don’t pass up the opportunity, because it’s a great place to go.

LinuxCon Europe lasted three days (21-23 October) and it was filled with interesting presentations. It was actually a two-for-the-price-of-one conference, being bundled with CloudOpen Europe and it was colocated with some other events such as the KVM Forum, Linux Automotive Summit, Embedded Linux Conference and with the private Kernel Summit. The theme of the main conference was, of course, The Cloud.

The conference started with a keynote from The Linux Foundation‘s Executive Director, Jim Zemlin, with a positive status report on the Linux ecosystem. If you want to see him in action outside LinuxCon, I suggest you watch Jim’s presentation at TEDx. The next presentation was from someone from Twitter, but it was so full of marketing and so void of meaning that I don’t even remember what it was about. The last keynote of the first day was given by someone from Citrix with a very simple title: “We won. What’s next?”. It was an interesting presentation and it lived up to the title, because it didn’t suggest what to do in order to get a bigger market share in IT, but rather how the Linux/open source model could be used to move other industries forward. One of his examples was the medical world, where the technologies are, in the big picture, very old. This is a place where people could contribute to provide new technologies at reasonable prices.

The parallel presentation sessions were numerous and on different topics, as is typical for a conference. You always had somewhere to go, but, unfortunately, sometimes two or more interesting presentations took place at the same time, and at other times none seemed worth going to. Most of the presentations were cloud-related. I attended some focused on network infrastructure. I went to one about VXLAN to learn more about the technology (I had heard about it in the past, but didn’t know the details). There were, actually, a lot of presentations that touched on the VXLAN subject, so I got some useful information about that. It’s hard to look at VXLAN as you would a normal LAN, but you can see the need for it in a world of datacenters. The architecture is deeply tied to virtualization and virtual network devices (like Open vSwitch).

Being a hot topic, SDN (Software Defined Networking) was present mainly in the form of the OpenDaylight Project. OpenDaylight is a project under the umbrella of the Linux Foundation that has backing from big names in the networking industry (such as Cisco, Juniper, Brocade, Citrix, Intel, IBM, VMware). Its goal is to build a vendor neutral framework for modern networks, to make management of both network infrastructure and services a lot easier. Though I was sad to hear that it was all written in Java.

Another topic I followed was virtualization. I went to a presentation from someone at oVirt, a management interface for KVM-based virtual machines. The presentation was about how snapshots work on COW (copy-on-write) images and how they enable features such as live migration on KVM. Afterwards, I had a long but interesting conversation with the speaker about oVirt, KVM and other virtualization technologies.

In the lobby, where all the partner companies and communities had stands, there was a lot to explore. For example, I continued virtualization related talks at the stands of oVirt, Red Hat and GlusterFS (they seemed close friends). Related to cloud/cluster filesystems, I found out about Ceph from two companies/communities unknown to me (OrangeFS and InkTank). It seems common in the cluster world, but I had never interacted with it. I learned about its implementations and features (FUSE implementation vs kernel implementation, data replication etc.).

Related to virtualization and distributed storage, I couldn’t have missed the OpenStack and CloudStack stands. Although I’ve heard about these projects a lot, I never knew exactly what they did. I always had the impression that they were just buzzwords for virtualization solutions. I had a long discussion with the guys from CloudStack, and it seems my impressions were right. I got to play around for the first time with the CloudStack interface. My questions for the CloudStack people drew a comparison with VMware’s vSphere (with which I had some experience), and it turns out it’s basically the same idea. It’s just that OpenStack and CloudStack are open source, and that comes with the good and the bad: besides the fact that it’s free, you get the good part (it’s flexible and you can have a lot of strange architectures) but also the bad part of not being so friendly and “enterprise looking”.

To be continued. Two more days of the conference to cover and a lot to say.


Online Education: CrashCourse & YouTube Education Channels

I talked about emerging online education platforms like Coursera, Udacity, Khan Academy and what are the features that make those projects interesting. But I couldn’t close this series of articles without mentioning a smaller but cool project that I found.

Crash Course is a YouTube channel that offers courses that you would find in a school curriculum, but in a more unorthodox way. Each course is made up of about 40 videos, each about 10 minutes in length. So far, the channel has offered a course on World History and one on Biology, with others (US History, Literature and Ecology) in progress.

So far, it sounds like Khan Academy but without a special site as the framework. It’s all just YouTube: videos and playlists, with comments from users in the classic YT way. It’s the simplest way of doing things (this is how Khan Academy actually started).

What makes these courses interesting is the way the educational information is presented. It’s the way you wish your high school teachers had taught every course: in a fun way. The lessons are filled with (clean and smart) jokes, interesting pop and geek culture references and cool animations (with the help of a graphics team). The ratio of playing to learning is close to perfect, so you can watch these short videos and learn while smiling.

The story of the channel starts with two brothers who had their own YouTube channel, which became popular on the Internet. Google offered them some funding through a program that promotes original and useful material on YouTube. With the sponsorship from Google, they developed their own online TV-show-like channel called Crash Course. The videos produced are of the same quality as a show filmed for a large TV station (so it’s not just home-made videos).

Recently, I found another channel that has the two brothers’ trademark: SciShow.

And since I am presenting things that are actually educational on YouTube, I would also like to tell you about two other cool channels.

One is ViHart, made by a young girl who loves math. She has some interesting videos, both funny and informative. Her channel was noticed by Khan Academy and her videos are now part of Khan Academy’s math section.

Another is CGP Grey. He also has short informative videos on various subjects. Although he doesn’t have specific courses or topics, I learned things from history to politics to science.

It’s nice to see that YouTube is not just filled with useless information that people waste time on, but also has many resources from which people can learn new things. And it’s great to know that you can teach things to a worldwide audience with rather low logistics effort, all thanks to YouTube and the Internet.

Digital education for kids: Scratch

A friend of mine, Laura, is involved in an interesting project called DigitalKids. The idea is to teach young children (8-14 years old) how to use computers, as an after-school activity. The reality is that kids these days are born with computers, tablets or smartphones in their hands, so no extra effort is needed to get them in touch with a computer for the first time. The idea is to teach them how to do useful things with it.

For one of the weeks of the DigitalKids course, I was told about a very interesting tool called Scratch. It’s an actual programming language, but not in the way you would usually think of a programming language. Instead of writing code, you drag and drop objects that represent things like conditions, loops and operations. Instead of writing a succession of lines of code to make it do something, you tie these object blocks to one another in a logical succession. Everything is very visual.

Scratch is an open source platform developed at MIT with the exact purpose of teaching programming to kids. Being open source, it’s available on all platforms (Linux, MacOS and Windows) and has been translated into several languages (including Romanian). This makes it very accessible for any user.

Kids can create simple programs like animations or even games. They can publish their projects online for people to see their result as a Flash object.

I stumbled upon it again on a site that promotes teaching programming in US schools. They have a video about Scratch.


Online Education: Codecademy

This idea of online education is not that new, especially in the IT world. Since its beginnings, the Internet has been used to distribute information to its users. One of the most primitive forms of “online education” were the tutorials found online. Some of us remember the days (about ten years ago) when the Internet was flooded with sites offering “PHP&MySQL tutorials”. Today you can find tutorials about anything: from C to Java, from Flash to HTML5, from Linux to Photoshop.

Probably one of the first sites to offer quality, organized tutorials was W3Schools (not affiliated with the WWW Consortium, despite the name). It taught you to build HTML sites through a learn-by-example and then try-it-yourself approach. It was like the Internet teaching its users to “build more Internet” by learning to create new sites. Now the site offers tutorials on HTML, CSS, JavaScript, PHP, MySQL and others, concentrating on the technologies of the Open Web (like HTML5).

But modern Internet-aided education means more than just having information available online. It is said that Web 1.0 was information being put online, Web 2.0 was the organizing and linking of that information (using things like RSS to link information and sites like Google to search through it) and Web 3.0, “The Social Web”, is the connection of information to people (through social sites like Facebook, YouTube and Google+). The same can be said about modern online education: it is trying to be social, and sites like Coursera or Khan Academy are doing exactly that.

One site I came across that fits this “Online Education 3.0” description is Codecademy.

You will find that it has some features similar to Khan Academy. First of all, users can access courses openly. Codecademy offers courses about Python, Ruby, JavaScript and HTML5/CSS3. You can just go to the site and start going through the courses, or you can log in with an account (it can be linked to your Google or Facebook account) and keep track of your course progress.

Unlike Coursera or Khan Academy, students don’t get video presentations, but lessons in the form of tutorials. It’s more like a lab/seminar than a lecture. What it lacks in multimedia, it makes up for with its interactive framework. Each lesson gives you an introductory description of a concept, with code examples, and then you get some exercises to test out the presented concept. You do this in a Web interpreter (they have one for Ruby, Python and JS/HTML). You can use the interpreter to test (almost) any code you want. You write your code to solve the exercises, and an automated evaluator tells you if you completed the task successfully. If so, you can move on to the next lesson.

The lessons are chapters like “Functions” or “Flow control” and are organized into smaller sections, each with a certain concept in mind. The lessons are grouped into tracks, one for each programming language available. You can go to any lesson at any time and you can go through a lesson as many times as you want. You can also rate each section to give feedback and discuss it with other users on a forum.

Of course, to make things fun and social, we have badges and points. For each lesson successfully completed, you get a badge that you can put on your profile page. You get badges for earning points over multiple tracks, or points/badges for doing tasks each day, over multiple days (to make you come back and do more tasks). Here’s my profile.

But as we said before, it’s not just the information being offered to the user, it’s the framework of the site. And Codecademy offers an infrastructure where not only can you learn, but you can also teach. The site allows you to create your own lessons. It has a very user-friendly interface for adding lessons and exercises, starter code for the interpreter and tests for the interpreter to run to verify a student’s code. This is important to the peer-to-peer model of education, because users can easily create new content for other users.

So if you want to learn a new language in a fun way, go to the site and start coding.

Online Education: Khan Academy

While people, like those at Coursera, are trying to bring education into the new century, others are trying to redefine it. Such a person is Salman Khan, who decided to start his own academy.

Khan Academy is a project that plans not only to provide higher education courses, but all kinds of lessons to all age groups. Rather than having entire courses, the information is delivered in the form of microlectures. These microlectures are organised into different topics and categories.

This might sound kind of chaotic, until you introduce the idea of a knowledge map: something that links topics together. It tells the user that after you finish topic X you can go to topic Y or maybe to Z. It’s not necessarily a hierarchy telling you that you need to finish topic A to get to topic B, because you have to remember the target users: everybody and anybody. You may be a high-school student who wants to learn Calculus without having to relearn first grade math. The knowledge map is a graph that just suggests similar topics to you. If you do want to learn Advanced Calculus but don’t know anything about the subject, maybe you just start with Basic Calculus. And if you start that and find out you are not able to do it, maybe you should go back to Pre-Calculus. The map shows you the resources you have available.

But how do you know that you are ready to move on to another topic, or that you need to move to a less advanced one, when there is no teacher to evaluate you? Well, you could evaluate yourself. Or, if you don’t trust your own evaluation, besides the videos there is a set of interactive activities that generate questions on the current topic. When you answer enough of them, you can move on. Or you can answer more. Or watch the videos again. It’s all up to you. If you don’t understand why the answer to a question was what it was, you can just click for the activity to solve it for you step by step (you can let the system solve the part you don’t know and solve the rest yourself).

Coursera, Udacity and Khan Academy are successful because of their infrastructure. All have organised information (videos, lectures, assessments) and, more importantly, forums. Khan Academy goes one step further. Its videos are posted on YouTube, so they are easy to manage and share. All content is open, so you don’t need an account or to register for a course to access it. Want to ask a question? Just post a comment on the Reddit-style board each video has and the community will answer.

But since most users of Khan Academy are children, learning also needs to be fun (since we are talking about modern education). And if pictures and videos instead of text, or a multicolor digital screen instead of white chalk on a blackboard, or an HTML5-written digital protractor to measure angles isn’t fun enough, you have badges. Most activities earn you points; you don’t level up, you just keep track of the effort you put in. Earn a number of points, get a badge. Earn a number of points fast, get a badge. Finish all videos and activities in a topic, get a badge. Ace all activities on a topic, get a badge. Post comments on a topic, get a badge. Points and badges give users a sense of accomplishment and a reward for their effort. You can post your profile publicly (here is my profile).

So it all comes down to the infrastructure, and Khan Academy has a very good one. Like Coursera, it’s based on open courseware and the power of the community (peer-to-peer education). But it adds extra things, like the uniformity of the site. Unlike Coursera, where every course has its own rules, Khan Academy has the same look and feel whether you are taking math lessons or chemistry ones. That is what allows the points and badges system. The site is also very modern because it’s very “social”: you can log in with Facebook or Google, share the videos, post comments, discuss topics Reddit-style and publish your profile.

More importantly, because of the well-thought-out infrastructure, Khan Academy can accomplish its mission: to provide education for the entire world. Although the materials are currently mostly in English, any language can be integrated. With the help of the network of contributors, transcripts are available in a large number of languages (from Spanish to Chinese).

Hear Khan himself present his Academy at TED:


Online Education: Coursera

The Internet and many modern technologies have changed the way we look at education. Although we are still tied down to centuries-old traditional teaching methods, new learning opportunities are starting to pop up. I would like to start a series of posts about Online Education.

Probably one of the most popular such endeavours is the Coursera project. Coursera brought together big universities like Stanford and Princeton in order to provide Open Courseware. This means that materials like presentations (sometimes accompanied by videos), homework projects and assessments are free and open to be accessed by people who aren’t officially enrolled in those courses at the universities. MIT was already doing this through its MIT OCW, but Coursera went one step further.

Coursera focuses on the power of the community, in the form of a world-wide classroom. The courses provided are synchronized with actual classes at the universities, meaning the online class lasts as long as the onsite class. The courses and assessments are published at the same time as the ones in the universities. And to make things as close to the real thing as possible, forums are available so that students can discuss among themselves and with their teachers.

Some courses provide certificates of graduation for the students, some do not. But everyone can take a course just for the fun of learning new things, no matter where they are in the world. Want to take a Compilers course from Stanford? You can. Want to take a World History class from Princeton? You can. Even if you live in Europe, Africa or on a remote island (as long as you have an Internet connection).

Today, more than a hundred courses from more than 30 universities are available with more than 2 million students already participating.

Similar projects: ai-class, Udacity, edX.


This site started out when I bought a domain and didn’t know exactly what to do with it. I had the idea of installing an easy-to-use CMS to post technical articles about things I played around with, in order to document them for further use (for myself but, more importantly, for others like me). WordPress was the obvious choice, so it became a blog.

Although I have tried to keep things technical, some posts had a more personal touch. From what the tag cloud has discovered in my posts over the last 5 years, this blog contains stuff about Linux & Cisco (out of my passion for systems and network administration), Open Source & ROSEdu (although not the solution to all the world’s problems, Open Source and Open Education are things I like to promote), and UPB & CS (the things that happen at the faculty where I study(-ed)). In the blog’s first year there were lots of posts, then came a period of pause, and then the last period, with some posts, but rather few.

I have bought a new personal domain and I want to restart this blog (yes, I have said that many times before… maybe this time it will work). I will try to have both technical and personal posts (hopefully more of the former). I will mostly post in English (although I will use Romanian when needed). The blog has moved to the new address.

Thanks and I hope you will come back.

Open Source Software on Windows

I use Windows 7 on my laptop. But that doesn’t mean you can’t find OSS in the Windows world. So I would like to make a list of the programs I install on my machine whenever I do a fresh install of the operating system. And for those who didn’t know there are free and open solutions for some programs, maybe you will find it useful.

Note: the following contains a LOT of personal choices. Not everybody has to agree with them.

Without a doubt, the first thing I install is the browser. I am a Mozilla Firefox fan, so the only address in my IE history is the Firefox download page. For me, it is the best browser by a long shot. I also install Google Chrome (theoretically also open, because it’s based on the Chromium project) to keep as a second browser for other people who want to log in to my computer. I find Firefox better for geeks, whereas Chrome is better for less technical users. But that’s just a personal opinion.

An archive manager is something any system needs (the Windows built-in one is not that great). And since I hate the “please buy this” windows in WinZip or, worse, WinRAR, I usually use 7-Zip. It’s nice because it’s free, light, easy to use and supports any archive type I need (including tar and gz). I was also recommended another tool, called PeaZip, which I am currently testing.

Since not a day goes by without having to SSH into another machine, I need an SSH client. PuTTY is the way to go. One thing PuTTY doesn’t do is scp; for that I use WinSCP. I don’t even know a non-free alternative to these, but since they do such a good job, I don’t need to.

I actually use WinSCP as an FTP client and it does the job most of the time. But there are times when I need something built for FTP, and for that I use FileZilla.

For entertainment purposes (playing video files and listening to online radio), VLC is the best way to go. I used to use MPlayer, but it lagged behind, so I moved to VLC.

I mostly use WordPad for what Notepad should do, but both lack features and ease of use compared to a sane text editor. At some point I used Vim for Windows, but the truth is that Vim is best in a CLI. So I started using Notepad++, which is exactly what the name says: light and easy to use, but with all the features you would need. It’s actually very good as an IDE for programming.

Since we live in a peer-to-peer world, I need a torrent client. I was a uTorrent user, but since they started adding ads and other spyware, I turned to good old Deluge. Not as full of features and eye candy as uTorrent, but it gets the job done. And it’s simple and light.

On Linux, I use xchat2 as an IRC client. But the Windows version of XChat is not free, so I turn to Pidgin. It’s actually not that good as an IRC client (IRC support feels kind of patched on), but it is very good for other IM protocols.

I sometimes need to edit or create images. Most of what I need, I do in Paint (yes, Paint). But there are times when I need something else. Inkscape is a good tool because it’s simple to use and even does vector formats (like SVG). For image editing, GIMP is what’s needed (although I can’t say I really know how to use it… but I can’t use Photoshop either…).

Maybe not something everyone needs, but almost a must for people in computer networking: Wireshark, for packet capturing. I always end up installing it after a while.

But there are cases where I know there are OSS alternatives but I prefer the non-OSS ones. LibreOffice and OpenOffice are OK (I usually install LibreOffice) but are still way behind Microsoft Office (or even Google Docs sometimes). And when it comes to virtualisation, I prefer VMware (Workstation or Player) over VirtualBox (though I do keep VirtualBox installed most of the time).

So, basically, this is what my laptop looks like. And yes, there is open source on Windows.

[Techblog] Grub2 and ISO boot

[Originally posted on ROSEdu Techblog]


Grub2 is the next-generation Linux bootloader, meant to replace the “Legacy” Grub version. It is a complete rewrite of Grub 1; only lately has it become fully featured compared to the old version, and it even comes with some interesting new features.

The old Grub’s configuration was rather straightforward, everything being done in a configuration file in the grub directory of the /boot partition (it’s a common practice to have /boot mounted on a separate filesystem). In Debian it was usually /boot/grub/menu.lst and in Red Hat /boot/grub/grub.conf (sometimes one being a symlink to the other).

The configuration file for Grub2 is /boot/grub/grub.cfg. But the file itself should never be modified by hand (not even root has write access). Instead, it is generated by commands like update-grub2, based on other configuration files such as (in Debian) /etc/default/grub, which holds global configuration: timers, default settings and so on.
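As an illustration, a Debian-style /etc/default/grub might look like this (the values below are typical defaults chosen for the example, not taken from a specific system):

```shell
# /etc/default/grub -- edit this file, then run update-grub2
# to regenerate /boot/grub/grub.cfg

GRUB_DEFAULT=0                       # index of the menu entry booted by default
GRUB_TIMEOUT=5                       # seconds the menu is shown before booting
GRUB_DISTRIBUTOR="Debian"            # name prefixed to the generated entries
GRUB_CMDLINE_LINUX_DEFAULT="quiet"   # kernel parameters for normal boots only
GRUB_CMDLINE_LINUX=""                # kernel parameters for all boots
```

The file is sourced as a shell script by the generators in /etc/grub.d/, which is why it uses plain variable assignments.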

The menu entries for the operating systems themselves are generated from files in the /etc/grub.d/ directory (in Debian). An interesting feature of Grub2 is that these files are actually Bash scripts: OS entries don’t need to be hard-coded, they can be scripted. One such script is the 10_linux file, which detects any new kernel image in the /boot directory and writes a new entry for it without manual intervention. Manual entries can also be written in these files (usually in the 40_custom script file).

An interesting new feature in Grub2 is the possibility to boot from an ISO file. A LiveCD can be stored as an .iso file on disk and loaded by Grub without having to burn it onto a CD or boot the normal system first. A menu entry for ISO booting would look like this:

menuentry "Ubuntu LiveCD" {
        loopback loop (hd0,1)/boot/iso/ubuntu-12.04-desktop-i386.iso
        linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/boot/iso/ubuntu-12.04-desktop-i386.iso noprompt noeject
        initrd (loop)/casper/initrd.lz
}

Based on the previous ideas, here’s a way to configure Grub to make an entry for every .iso file in a specified directory. First, create a directory to store the .iso files (e.g. /boot/iso/) and move your LiveCDs there.

Next, make a script in the /etc/grub.d/ directory. Let’s call it 42_iso (the number in front dictates the order in which the scripts are executed).



#!/bin/sh
ISO_DIR="/boot/iso/"

for iso in $ISO_DIR*.iso; do
    echo "menuentry \"$iso\" {"
    echo "set isofile=\"$iso\""
    echo "loopback loop (hd0,1)\$isofile"
    echo "linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=\$isofile noprompt noeject"
    echo "initrd (loop)/casper/initrd.lz"
    echo "}"
done


Don’t forget to make the script executable, then run the update-grub2 command to regenerate the Grub2 configuration file.

chmod +x /etc/grub.d/42_iso
update-grub2

Thanks to DobRaz for suggesting ISO booting with Grub.

[Personal] Trip to Amsterdam, The Orange city

Amsterdam is one of the cities I had long wanted to visit, and only this week did I manage to do it. Together with some friends, I spent 4 days in the orange city. And we picked the most interesting day to arrive in the capital: April 30th, Queen’s Day.

Queen’s Day (which is, in fact, the Queen Mother’s day) is celebrated by the Dutch with parties and parades all over the city. The requirement is to wear something orange. Equipped with our orange T-shirts, we took whatever public transport was still running that day and headed downtown.

The streets, narrow as they are, were full of people partying. Everywhere you looked, you saw only people in orange, each with a beer in hand. The weather sided with the Dutch: on Queen’s Day itself there was sunshine of a kind rarely seen in the Netherlands. The boats on the canals were also packed with people, the atmosphere coming close to the Venice Carnival. In the parks, people sat on the green grass, enjoying the sun. In the city centre, in a place called Dam, a small amusement park had been set up. Those who weren’t partying came out in front of their houses and sold whatever unused things they had at home; the whole centre was one big flea market. Braving the crowds, we crossed the old centre from one end to the other (from the Van Gogh Museum to Central Station).

The next day, the city was recovering after Koninginnedag (Queen’s Day). The piles of garbage (beer cans) were starting to be picked up. The streets had gone from overcrowded to almost deserted (the revellers probably hadn’t recovered from the drinking yet). We started to learn the tram and metro lines and, in general, where we were on the map.

We started from the museum district, where the famous “I Amsterdam” sign is, with a visit to the Van Gogh Museum, where we learned everything there was to know about the artist. Quite hungry by then, we looked for something to eat and were left very disappointed by the Dutch restaurants.

Afterwards, we headed again through the old centre towards Dam and Central Station (the central railway station, where all the metro lines led). From near Central Station, we took a boat tour that carried us along the canals of the centre and gave us a short summary of the city’s historical landmarks. The tourist boats were all covered, making it obvious that a rainless day in Amsterdam was something unusual (although we were very lucky with the weather). We also took advantage of the giant Ferris wheel (similar to the one in Prater, Vienna) installed in the centre, climbing into a cabin that took us to a height from which we could see the whole city and everything we had visited so far.

As for any tourist in Amsterdam, the next destinations were a bar with various kinds of Dutch beer, a coffeeshop where… er… we found no coffee, and the Red Light District. The walk through the Red Light District was less impressive than you would expect, and the description of what was there is, well, fairly well known.

We began the third day by visiting the university in Amsterdam where our colleagues were studying: VU (Vrije Universiteit). Although the campus wasn’t spectacular from the outside, inside it looked very good: lots of labs and offices and many facilities for students. We also passed by Andrew S. Tanenbaum’s office and saw on some doors the names of PhD students who had graduated from UPB.

In the afternoon, we headed to one of the city’s railway stations to go to a small nearby town, Zaanse Schans, in search of windmills. We got off at the town’s station and you could immediately feel it was a small town (you could barely see people on the streets). We crossed a river, seeing along the way a bridge being raised to let a ship pass, and reached a small village. The view was very interesting: small houses next to plots irrigated by mini canals crossed by mini bridges. Above these little houses rose many giant windmills. The wind blew very hard and spun the mills’ blades with force.

In this village, we visited two houses: one where cheese made in the area was sold (next to which was a sheep farm) and a clog shop. Each held demonstrations of how the cheese and the clogs, respectively, are made. We watched one such demonstration, in which, in 5 minutes, someone crafted a traditional Dutch clog from a block of wood right in front of us.

Done with the windmill village, we headed to the station to go not back to Amsterdam, but to Utrecht, to meet other friends from Romania who had come there to study. Utrecht is a medieval city that looks much older than Amsterdam; it reminded me of Bruges compared with Brussels. We didn’t have much time to stay, but we took a walk through the very quiet little city.

We dedicated the last day to a visit to the Amsterdam Zoo, after which we did our duty as tourists and bought a few souvenirs. Among the interesting things for sale were some porcelain clogs (from Delft) containing tulip bulbs (the famous Dutch tulips).

The visit to Amsterdam was very interesting, especially thanks to the people who had the patience to take us to the right places. Those were four days full of activities, and we still didn’t manage to see everything. Amsterdam is a very active city and I think the only reason you wouldn’t want to live there is the weather. I would like to return to visit places like Delft and Rotterdam and to dedicate more time to cities like Utrecht.

[TechBlog] Exploiting environment variables

[Part 1 from ROSEdu Techblog]

Environment variables can be very important when creating new processes. For example, the PATH variable, which decides which executable is run for a given command name.

The easiest way to exploit PATH is to add the current directory . to the list and overwrite common shell commands with something else.

$ cat ./ls
echo P0wn3d
$ ls
file1  file2  ls
$ ./ls
P0wn3d
$ export PATH=.:$PATH
$ ls
P0wn3d

But that only affects the user’s own shell and can’t harm the system. What if other conditions exist on the system, like the use of the SUID bit? Normal processes run as the user who executes them, regardless of who owns the executable file (as long as the user running the file can read it). If the SUID bit is set on an executable file, any process started from that executable runs as the owner of the file, not as the owner of the shell. Here is an example of a very insecure program that should never be SUID-ed.


#include <unistd.h>

int main(void)
{
	/* runs whatever executable "ls" resolves to via the PATH variable */
	execlp("ls", "ls", NULL);
	return 0;
}

Let’s assume that the compiled executable from this code is owned by root, SUID-ed and put into /bin with the name ls_root.

$ ls -la /bin/ls_root
-rwsrwsr-x 1 root root 7163 2012-03-21 12:28 /bin/ls_root

What this enables, for example, is the listing of the /root directory by any normal user.

$ cd /root
$ ls
ls: cannot open directory .: Permission denied
$ sudo ls
$ ls_root

The code simply executes the ls command. But what if the ls command isn’t doing what it is supposed to do? Given this setup, as a normal user, we can do the following:

$ ln -s /bin/sh ls
$ echo $$
2337
$ ls
ls  ls_root.c
$ ./ls
$ echo $$
2405
$ whoami
user
$ exit
$ export PATH=.:$PATH
$ ls_root
# whoami
root

The ls_root process runs the ls command. The ls command resolves to an executable via the PATH variable (normally /bin/ls). But if the PATH variable is changed in the current bash process, the executable run by the ls command becomes something else. Since ls_root runs as root (thanks to the SUID bit), any of its children are also root processes. So if the ls command now runs a shell executable, we get a root-owned shell, which means root access.

The SUID bit is genuinely used on Linux systems (sudo and even ping use it), but those executables are implemented very carefully so that normal users can’t exploit them.


[Part 2 from ROSEdu Techblog]

Based on the previous article, let’s go one step further and study a similar exploit. This time we’ll be dealing with executables and dynamic libraries.

Let’s consider a simple custom library function:

/* random.h */
int xkcd_random(void);

/* random.c */
int xkcd_random(void)
{
    return 4;    /* chosen by fair dice roll (see xkcd #221) */
}

We can build it into a shared library:

$ gcc -shared -fPIC -o random.c

Let’s take a simple program that uses our function:

/* main.c */
#include <stdio.h>
#include "random.h"

int main(void)
{
    printf("8ball says:%d\n", xkcd_random());
    return 0;
}

If we want to use our shared object file from the current directory, we have to do two things. First, compile the program and link the shared library (with the -l flag), telling the linker to also look for libraries in the current directory (the -L. flag).

$ gcc -o main -L. main.c -lrandom

Second, the library is linked at compile time, but it won’t be found at runtime unless the loader knows where it is, which we can arrange with the LD_LIBRARY_PATH variable.

$ ./main
./main: error while loading shared libraries: cannot open shared object file: No such file or directory
$ export LD_LIBRARY_PATH=.
$ ./main
8ball says:4

To ensure that we can always use the library, we can place it in the system’s library directory (and run ldconfig to refresh the loader’s cache). Note that this means we trust the code of that library, and only the administrator can do this.

# mv /usr/lib
# ldconfig

So now, each time the main program runs, the loader dynamically loads the random function from the system library. But what if we have another function, from another library, that has the same name but does something else:

/* evil.c */
int xkcd_random(void)
{
	return 666;
}

$ gcc -shared -fPIC -o librandom.so evil.c

If we overwrite the LD_LIBRARY_PATH variable with the . directory, the loader will search ./ before /usr/lib/, and this requires no modification of the main program (no recompile needed).

$ ./main
8ball says:4
$ export LD_LIBRARY_PATH=.
$ ./main
8ball says:666

This is similar to the PATH variable hack discussed in the previous article, but at a much lower level. We can place an actual exploit here, such as spawning a shell:

#include <unistd.h>
int xkcd_random(void)
{
	execlp("/bin/sh", "/bin/sh", (char *)NULL);
	return 666;
}

As before, we use a root-owned executable with the SETUID bit set, hoping to run things as root.

$ ls -la main
-rwsrwsr-x 1 root root 7192 2012-04-18 15:13 main
$ ./main
8ball says:4

The program executed safely.

The library loader is smart enough to ignore LD_LIBRARY_PATH when the executable is setuid, precisely because of such attacks. So even though you can exploit programs as a normal user, you can’t affect the system this way. The low level turns out to be a little more secure than the scripting level.

[TechBlog] ifconfig vs iproute2

[Originally posted on]

On modern Linux distributions, users have two main ways of configuring the network: ifconfig and ip.

The ifconfig tool is part of the net-tools package, alongside other tools like route, arp and netstat. These are the traditional userspace tools for network configuration, made for older Linux kernels.

iproute2 is the newer package: it provides the ip tool as a replacement for the ifconfig, route and arp commands, ss as the new netstat, and tc as an entirely new command for traffic control.

There are pros and cons for each of them and there are users (and fans) of each. Let’s see the differences…

First of all, why was iproute2 introduced? There had to have been a need for it… The reason was the introduction of the Netlink API, a socket-like interface for accessing kernel information about interfaces, address assignments and routes. Tools like ifconfig used the /proc file hierarchy (procfs) for collecting information; their output was reformatted data from various network-related files in /proc.

alexj@hathor ~/techblog $ strace -e open ifconfig eth0 2>&1 | grep /proc
open("/proc/net/dev", O_RDONLY) = 6
open("/proc/net/if_inet6", O_RDONLY) = 6

The cost of operations like open and read on these files is rather big compared to the netlink interface. For comparison, let’s assume that we have a large number of interfaces (128) with IPv4 and IPv6 addresses and their associated connected routes.

alexj@hathor ~/if $ time ifconfig -a >/dev/null
real	0m1.528s
user	0m0.080s
sys	0m1.420s
alexj@hathor ~/if $ time ip addr show >/dev/null
real	0m0.016s
user	0m0.000s
sys	0m0.012s

But most normal users are not geeky enough to care about a millisecond speedup. They do, however, care about usability. And iproute2 does seem to have the better user interface. The ip command is better organized, around what it calls objects. Links, addresses, routes, routing rules and tunnels are all objects that can be added, deleted or listed. Once a user learns how to add an address, they can intuitively guess how to add a route, for example, because the syntax is similar.

Keyword shortening and auto-completion make the ip command more efficient by removing redundant characters. The following commands are identical in effect:

ip address show
ip address
ip addr show
ip a s
ip a

Some network engineers will like iproute2 because it’s similar to Cisco’s IOS: “ip route show” in Linux vs “show ip route” in IOS. Another usability feature is the /prefix-length format for subnet masks instead of the dotted-decimal format, the former being shorter to write and more in line with the concept of VLSM.

So what does ifconfig still have to keep it around? Its biggest weakness is its biggest strength: its age. ifconfig has been out and used for so long that it’s very hard to put away. Many scripts at the heart of Linux distributions still rely on ifconfig, most system administrators are used to the ifconfig command, and it’s hard to move them to something new and unfamiliar. A lot of network configuration tutorials on the Internet teach beginners ifconfig and not iproute2. For example, LPIC-1, one of the biggest Linux certifications out there, still requires ifconfig skills to pass the exam and barely mentions iproute2.

When released, iproute2 had at least one advantage over ifconfig: it could interact with the IPv6 stack, while ifconfig was IPv4-only. But since then, fans of ifconfig have patched it to be IPv6-ready as well.

Other features were not replicated, though. In old Linux kernels, an interface could have only one IP address, so with ifconfig you could configure only one address per interface. In newer kernels, each interface has a list of addresses, which iproute2 can manage via the Netlink interface. The latest ifconfig versions still rely on the idea of subinterfaces (aliases) to provide more than one address on an interface.
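For illustration, here is what managing multiple addresses looks like on each side (the addresses, prefix and interface name are made up for the example, and both commands require root):

```
# iproute2: just add a second address to the same interface
ip addr add dev eth0
ip addr add dev eth0

# net-tools: the second address needs an alias subinterface
ifconfig eth0 netmask
ifconfig eth0:1 netmask
```

With iproute2 both addresses live on eth0 itself; with ifconfig the second one only exists through the eth0:1 alias.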

So, given all these arguments, iproute2 should be declared the winner. But it’s not that easy. Much like IPv4 vs IPv6, where the newer option is the obvious technical choice yet adoption is slow, iproute2 will eventually replace ifconfig. It’s just going to take a long time, so net-tools will still be around for a while before being phased out.


This week I want to pay tribute to an open source project called the LXR Cross Referencer. LXR is a web tool that lets you browse the source code of a software project, navigating link by link based on included source files, functions or variables.

LXR can be downloaded from the project’s website [1] and applied to any software project.

The most popular instance of LXR is found on the project’s initial page [2] as an instance for the Linux kernel. This site has a complete history of the Linux code, from version 0.01 to the latest stable version. By opening two windows with two different versions of a file, you can compare the code and see what’s been added or changed between versions.

It’s very useful for finding where a function or a constant has been used, or for seeing in which header a function has been declared, and where it is defined and used.

Note that all of the above can be done via command-line tools like ctags or cscope alongside vim or emacs, or with grep -r, diff and git. But the friendly part of LXR is that everything is already on the site, so you don’t need to download anything locally and you can use all of it as long as you have an Internet connection.
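As a quick sketch of the command-line route, grep -r alone already gives you LXR’s “where is this symbol used?” view (the throwaway files and the helper symbol are invented for the example):

```shell
# two tiny files that declare and define a symbol
tmp=$(mktemp -d)
printf 'int helper(void);\n' > "$tmp/helper.h"
printf '#include "helper.h"\nint helper(void) { return 1; }\n' > "$tmp/helper.c"

# list every file mentioning the symbol, like LXR's usage search
grep -rl helper "$tmp" | sort
```

On a tree the size of the Linux kernel this is exactly what you would run locally, with LXR saving you the checkout.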



[CCIELab] IOS + Linux = Quagga

[Originally posted on]

Cisco IOS’s shell is a popular interface for devices in the networking world. But the networking world also has a lot of Linux/open source fans. The Quagga open source project tries to bring IOS and Linux together by providing an IOS-like interface for configuring Linux’s interfaces, routing table and firewall, alongside its own implementations of RIP, OSPF and BGP daemons.

The Quagga Software Routing Suite comes as a set of daemons. The main one is the zebra daemon (Zebra is the old name of the project). This core daemon handles the interaction with the Linux kernel and also with the other daemons, like ripd (RIP daemon), ospfd (OSPF daemon) and bgpd (BGP daemon). Quagga is modular, so you can implement new protocols if needed via a standard API.

To configure Quagga, you first need to enable the daemons you want (at least the core one) in the /etc/quagga/daemons file. Each daemon has its own configuration file (e.g. /etc/quagga/zebra.conf, /etc/quagga/ripd.conf etc.). The IOS-like shell is accessed via the vtysh command. Once in this shell, most commands available in Cisco’s IOS are available.
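On Debian-style packaging, the daemons file is just a list of on/off switches, something like this sketch (enable only what you need; zebra must be on for the rest to matter):

```
# /etc/quagga/daemons
zebra=yes
ripd=no
ospfd=yes
bgpd=no
```

With zebra enabled, vtysh drops you into the familiar shell: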

Router / # cd
Router ~ # vtysh

Hello, this is Quagga (version 0.99.18).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

Router# conf t
Router(config)# hostname  LinuxRouter
LinuxRouter(config)# exit
LinuxRouter# show ?
bgp             BGP information
clns            clns network information
daemons         Show list of running daemons
debugging       State of each debugging option


Keep in mind that some things are not 100% identical to a Cisco router (e.g. the interface names). Here’s an example of how to configure an interface.

LinuxRouter# conf t
LinuxRouter(config)# interface  eth0
LinuxRouter(config-if)# ip address ?
A.B.C.D/M  IP address (e.g.
LinuxRouter(config-if)# ip address
LinuxRouter(config-if)# link-detect

Monitoring output (show commands) is similar, aside from some Linux-specific details (e.g. kernel routes exist in Linux, but not in IOS).

Router# sh ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
I - ISIS, B - BGP, > - selected route, * - FIB route

K * via, venet0 inactive
O [110/10] is directly connected, eth0, 00:03:41
C>* is directly connected, eth0
O [110/10] is directly connected, eth1, 00:03:36
C>* is directly connected, eth1
O>* [110/20] via, eth0, 00:02:46
O>* [110/20] via, eth0, 00:02:14
*via, eth1, 00:02:14
O>* [110/20] via, eth0, 00:02:41
O>* [110/30] via, eth0, 00:01:21
* via, eth1, 00:01:21
O>* [110/20] via, eth1, 00:02:08
C>* is directly connected, lo
C>* is directly connected, venet0
C>* is directly connected, venet0
K>* is directly connected, venet0

Configuring a routing protocol instance is also similar:

LinuxRouter# conf t
LinuxRouter(config)# router ospf
LinuxRouter(config-router)# network area 0

As you can see, coming from an IOS background, this tool is very easy to use on a Linux box. It is far from perfect, since it doesn’t have IOS’s or iproute2’s years in production, but it is cool to try out.

[TechBlog] Stack Allocation

[Originally posted on]

Stack space is the part of each process’ virtual memory where function arguments and return addresses are stored, along with local variables declared within a function. Usually, the stack begins at the high end of the virtual address space and grows down.

At every function call, a new stack frame is created on the stack. It contains the parameters passed to the function, the return address (the address of code in the caller function) and the locally declared variables.

For each function call, SP/ESP (the Stack Pointer/Extended Stack Pointer) is adjusted so the stack frame is big enough to accommodate the local variables. For example, in theory, if you have a local char variable and an int variable, the SP should be moved by 5 bytes.

In practice, the compiler allocates stack space a little differently than expected. It allocates local variable space in increments of a fixed size, so having two int variables or three int variables can sometimes cost the same.

As an example, gcc allocates in increments of 16 bytes. Let’s run an experiment… we take a simple C program and turn it into assembly code.

The C file looks something like this:

int main(void)
{
	int a=1, b=2;
	return 0;
}

The variables must be used after declaration or they will be ignored by the compiler.

The resulting assembly code (obtained with gcc -S) looks like this:

	pushl	%ebp
	movl	%esp, %ebp
	subl	$16, %esp
	movl	$1, -4(%ebp)
	movl	$2, -8(%ebp)
	movl	$0, %eax

Notice the subl instruction that reserves 16 bytes of stack space by decrementing ESP. Those 16 bytes are enough for four 32-bit integers. Whether you have 1, 2, 3 or 4 local variables declared (and used), you get those 16 bytes.

If we declare 5 integers, the allocated space becomes 32 bytes. Same thing for 6, 7 or 8. If we have 9 to 12 integers, the compiler allocates 48 bytes. And so on…

What if we don’t have only integers? Let’s add some chars.

int main(void)
{
	int a=1, b=2;
	char c=3, d=4;
	return 0;
}


	pushl	%ebp
	movl	%esp, %ebp
	subl	$16, %esp
	movl	$1, -8(%ebp)
	movl	$2, -12(%ebp)
	movb	$3, -1(%ebp)
	movb	$4, -2(%ebp)
	movl	$0, %eax

The function would need 10 bytes, but still gets 16. So the allocation is in increments of 16 bytes no matter the variable types.

The question remains: why? It has to do with cache alignment. The compiler tries to structure memory usage so that data can be fetched from memory and cached efficiently. Correct alignment causes the minimum number of cache misses on memory access.

Credits to SofiaN for help with initial observations and tests.

[CCIELab] Output manipulation in Cisco IOS

[Originally posted on]

Unlike Linux’s iptables, Cisco’s filtering via Access Control Lists sometimes has hidden behavior.

Let us test how ACL filtering works using the following topology. We assume that we have Layer 3 connectivity via static routes. We will apply ACLs on the outbound direction of F1/0 on R2 (we want it somewhere in the path from R1 to R3).


With no ACLs applied anywhere, all traffic will flow.

R1#ping source
Packet sent with a source address of
Success rate is 100 percent

Let’s start with the basics and make a classic standard access list that denies R1’s loopback.

R2(config)#access-list 42 deny host
R2(config)#int f1/0
R2(config-if)#ip access-group 42 out

The loopback on R1 is blocked…

R1#ping source
Success rate is 0 percent (0/5)

… but so is any other traffic that goes out of R2’s F1/0.

R1#ping source F0/0
Success rate is 0 percent (0/5)

The first rule of Cisco’s ACLs is that there is an implicit deny-all rule (“deny any”, or “deny ip any any” for extended ACLs) at the end of every ACL. But this is not visible anywhere. You have to know it.

R2#sh access-lists
Standard IP access list 42
10 deny (8 matches)
Extended IP access list BLOCK_HTTP

But what if the ACL is empty? What if you apply an access list that does not contain any rules (was never declared)?

R2(config)#int f1/0
R2(config-if)#ip access-group 28 out
R2(config-if)#do sh access-lists
Standard IP access list 42
10 deny (8 matches)
Extended IP access list BLOCK_HTTP

R1#ping source

Type escape sequence to abort.
Success rate is 100 percent

Traffic passes. The nonexistent ACL applied to the interface is ignored. But this is because you can’t have an empty classic (numbered) ACL. What if you do the same thing with a named ACL?

R2(config)#ip access-list standard EMPTY_ACL
R2(config)#do sh ip access-list
Standard IP access list 42
10 deny (8 matches)
Standard IP access list EMPTY_ACL
Extended IP access list BLOCK_HTTP
R2(config)#int f1/0
R2(config-if)#ip access-group EMPTY_ACL out

R1#ping source

Type escape sequence to abort.
Success rate is 100 percent

Traffic is still not filtered. So the rule is that an empty (nonexistent or deleted) ACL is ignored by the interface filter.

One more ACL applied on R2, this time with a deny-all rule (no traffic should pass out of F1/0).

R2(config)#ip access-list standard DENY_ALL_ACL
R2(config-std-nacl)#deny any
R2(config-std-nacl)#do sh ip access
Standard IP access list 42
10 deny (8 matches)
Standard IP access list DENY_ALL_ACL
10 deny any
Standard IP access list EMPTY_ACL
Extended IP access list BLOCK_HTTP
R2(config-std-nacl)#int f1/0
R2(config-if)#ip access-group DENY_ALL_ACL out

Ping from R1 is filtered.

R1#ping source
Packet sent with a source address of
Success rate is 0 percent (0/5)

Since no traffic should go out the interface, a ping from R2 to R3 should also fail, yet it doesn’t.

Success rate is 100 percent (5/5), round-trip min/avg/max = 8/20/44 ms

As a final rule, traffic generated by a router is never filtered by an ACL applied on any interface of that router.

ROSEdu Tech Blog

This fall, ROSEdu [1] introduced a new project: TechBlog [2]. Since we have managed to gather a lot of technically-minded people in our community, each with things to say about different technologies, we built a place to share such knowledge in the form of a blog.

Here is my first contribution.

Rescuing executable code from a process [3]. Comments on reddit [4].

A process is an instance of a binary executable file. This means that when you ‘run’ a binary, the code from the storage medium is copied into the system’s memory, more precisely, into the process’ virtual memory space. From a single binary, several processes can be spawned.

The virtual memory of a process, made up of pages, is mapped to several things: shared objects (libraries), shared memory, stack and heap space, read-only space and executable space. A good way to view what is mapped where is the pmap utility, or just looking in the /proc directory hierarchy. The /proc/$PID/maps file (where $PID is the process ID of the targeted process) holds the page mappings. Also in /proc/$PID you can find other useful files, like the exe file, a symlink to the executable, or the fd directory, which contains symlinks to all the files the process has open as file descriptors.
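You can poke at all of this for your own shell, using the same $$ special variable the rescue below relies on:

```shell
# the executable our shell was started from
readlink /proc/$$/exe

# the first few virtual memory mappings of the shell
head -n 3 /proc/$$/maps

# every file descriptor the shell has open
ls /proc/$$/fd
```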

Beyond useful information, what can we get out of procfs? Here is a situation that has been known to happen. You are at a console, in your bash shell, and you manage to delete some important files, like /bin/bash. Without that executable, you cannot run new shells, and after a restart your system will be inaccessible. What can you do?

The code of your bash is no longer on the hard drive, but it is in the virtual memory of the process you are currently running. You can find out the PID of the current shell instance using the $$ shell variable. Knowing that, you can cd to /proc/$$ and access the contents of the exe file there.

Although the exe file is shown as a link to the original file that is now deleted (so the link should be broken), if you cat it, you get its binary content. In fact, the entire original binary file. Here is the step-by-step process:

/bin # md5sum bash
e116963c760727bf9067e1cb96bbf7d3  bash
/bin # rm bash
/bin # echo $$
5051
/bin # cd /proc/$$
/proc/5051 # ls -la exe
lrwxrwxrwx 1 root root 0 2011-11-15 23:47 exe -> /bin/bash (deleted)
/proc/5051 # cat maps
00f9e000-00f9f000 rw-p 0001c000 08:01 263123     /lib/i386-linux-gnu/
08048000-0810c000 r-xp 00000000 08:01 284760     /bin/bash (deleted)
0810c000-0810d000 r--p 000c3000 08:01 284760     /bin/bash (deleted)
0810d000-08112000 rw-p 000c4000 08:01 284760     /bin/bash (deleted)

/proc/5051 # cat exe>/bin/bash_rescued
/proc/5051 # cd -
/bin # md5sum bash_rescued
e116963c760727bf9067e1cb96bbf7d3  bash_rescued
/bin # chmod +x bash_rescued
/bin # mv bash_rescued bash

What other things can we rescue? How about a file that was opened by a process? For example, a video file, opened by a player:

alexj@hathor ~ $ md5sum movie.ogv
9f701e645fd55e1ae8d35b7671002881  movie.ogv
alexj@hathor ~ $ vlc movie.ogv &
[1] 6487
alexj@hathor ~ $ cd /proc/6487/fd
alexj@hathor /proc/6487/fd $ ls -la |grep movie
lr-x------ 1 alexj alexj 64 2011-11-16 00:11 23 -> /home/alexj/movie.ogv
alexj@hathor /proc/6487/fd $ rm /home/alexj/movie.ogv
alexj@hathor /proc/6487/fd $ ls -la |grep movie
lr-x------ 1 alexj alexj 64 2011-11-16 00:11 23 -> /home/alexj/movie.ogv (deleted)
alexj@hathor /proc/6487/fd $ cp 23 /home/alexj/movie_rescued.ogv
alexj@hathor /proc/6487/fd $ md5sum /home/alexj/movie_rescued.ogv
9f701e645fd55e1ae8d35b7671002881  /home/alexj/movie_rescued.ogv

These things are possible because the instances of the files are still kept and used by the kernel. The VFS (Virtual File System) still holds references to the files’ inodes, and they won’t be released until the processes using them finish.
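The same rescue works with any process that keeps a file open. Here is a self-contained sketch that uses a file descriptor held by the current shell instead of a video player:

```shell
set -e
tmp=$(mktemp -d)
echo "important data" > "$tmp/file"

# keep the file open on descriptor 3 of the current shell...
exec 3< "$tmp/file"
# ...then delete it from disk
rm "$tmp/file"

# the open descriptor still reaches the data
ls -la /proc/$$/fd/3            # shows "... -> .../file (deleted)"
cp /proc/$$/fd/3 "$tmp/rescued"
exec 3<&-                       # close the descriptor

cat "$tmp/rescued"              # prints: important data
```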




[Personal] Tales from the States: The end

The three months in the US have come to an end. It was an interesting experience. Time to draw a line and make a few observations about the people in the States.

The first thing I would say is that all the clichés you see in the movies are true. From the houses with green lawns in front and no fence (or a very small one), to the western-style bars (minus the swinging doors), from the surfers on the ocean beaches to the red-carpet clubs, from the quiet of the suburbs to the crowds of the big cities.

Another observation is that everything in America is big: big cars, big portions of food, big distances. It makes for a huge difference compared to Europe. It is somehow symbolic that a mile is worth more than a kilometer, and since everything is far apart, an American’s life is different from a European’s. A car is an absolute necessity, and it is nothing unusual for your workplace to be 50 miles away from your home. There are not many small neighborhood shops, and you do most of your shopping at the city’s shopping centers. Since everybody has a car, gas is cheaper, but that is actually just an illusion: the price is lower, but the quality is worse (the octane rating is 75-85, with 90 already counting as high-grade); this leads to the need for a bigger engine (2 liters is a very small engine) and to big cars. Also, a manual gearbox is a great rarity. And if the importance of a car was not already obvious, the fact that they have no identity card and their only official document is the driver’s license says it all. But one positive effect is that their highways are everywhere, and some interchanges are, from an engineering and architectural point of view, incredible (with suspended bridges over suspended bridges).

For a European (unless you are from the UK), a first source of stress is probably the use of imperial units. The mile, the foot, the yard, with values generally unknown even to the locals. Degrees in F instead of C, and the less intuitive month/day/year date format.

But it seems to me that there is a big difference between the United States and Europe from a tourist’s point of view. In Europe, you are used to visiting castles, churches and monuments. In the States, things are too new for that. But the States have something very beautiful: nature. The fact that the distances between cities are large means there is room between them for things to see. Their national parks are very beautiful. If the Grand Canyon is not incredible enough, there are Yosemite and Sequoia (which, unfortunately, I did not see) or hundreds of entire forests of redwoods, the giant trees. Along the Californian coast, the California 1 highway offers a very interesting view: on one side, right next to you, you have the ocean and its beaches, and on the other side, just as close, you have mountains and coniferous forests. Big Sur is a very interesting place, because you think you are going to the beach, but the highway keeps climbing into high mountain terrain until you lose yourself in the thick forest; if you want to reach the beach, you have to descend on a forest road that suddenly opens onto a sandy beach, sheltered in a semicircle of cliffs.

Napa Valley was an area recommended for its wines. A little France of Northern California, the hills of this region are full of vineyards whose grapes produce wines recognized as being of very good quality. Although you would not expect an area suited to growing grapes, considering that a few dozen miles away the land is mostly desert, Napa is lucky enough to catch the ocean humidity and precipitation from San Francisco Bay. We went to a winery called Hess, which is the oldest in Napa, opened before the Prohibition era.

But besides the good parts, there are also some uglier ones. Probably the most annoying thing (for me, at least) was the high number of fake smiles on the faces of people trying to sell you things. It is not a bad thing to be friendly, but Americans try to seem so friendly just to get you to buy their stuff that it becomes disturbing. There is a degree of hypocrisy here, because when they get home, they do not want anyone invading their space (we lived next to a place that had a series of big “No trespassing” signs at the entrance). But maybe it was not necessarily a bad thing that people said “thank you” for everything. The food was quite different for a European and, I would say, worse. Everything was very sweet or very spicy and, of course, big.

There are probably many things I missed in this series of articles, because there were many things. Three months were quite a lot, but still not enough to do everything there was to do. From San Francisco to Los Angeles, from the Pacific Ocean to the Sierra Nevada mountains and beyond, to the Mojave Desert, California offered many attractions. It was fun while it lasted.

A sentence from a speech full of life advice (a speech that was also turned into a song) said “Live in Northern California once, but leave before it makes you soft”. I am glad I managed to have this experience. The preceding words were “Live in New York City once, but leave before it makes you hard”. It would be nice to also have the chance to get to know the East Coast.

I said my goodbyes to Mountain View and San Francisco, to US-101 and to California, and left the American continent heading for Amsterdam, to Schiphol airport, where I returned to European prices, and then home, to Romania.

[Personal] San Francisco

San Francisco is the closest big city to Mountain View and, as a consequence, a place I visited often… very often…

The easiest way to get to SF is by Caltrain. This train runs a route between San Francisco and San Jose, with stops in the important cities of Silicon Valley. From MTV to SF it takes about an hour by Caltrain, which runs roughly every hour, from around 7 in the morning until around 12 at night. The first car is reserved for bicycles, and it is recommended to have a bike to get around the city. It is also always a good idea to dress warmly when you come to SF, because the wind is almost permanent.

It is a big city, of course with a downtown of tall buildings, and a very large area thanks to the surrounding metropolitan zone. The Caltrain station in San Francisco drops you in a fairly central area from a tourist’s point of view. Nearby is a street called the Embarcadero, which follows the north-eastern contour of the city, bordered by San Francisco Bay. This area is, as the street’s name says, the (old, but still functional) port of the city.

On the eastern side of the Embarcadero is AT&T Park, the stadium of the local baseball team, the San Francisco Giants. It is a big arena, and a special one, because it sits right on the water’s edge. A (very) powerful hit from the stadium would send a ball into the water. The area is very crowded when the Giants have a game, because Americans come with the whole family to the show. A baseball game is not just a game, but an entire event for the spectators, who feel very close to their team.

The port starts at the Bay Bridge, with the Ferry Building, which used to be the control point for the ferries in the Bay. The building stands out with its clock tower, which has survived the city’s earthquakes. It is now used as a building of shops. “Pier 39” is a more touristy spot, also home to the aquarium. It is near Fisherman’s Wharf, where you can go see the seals. The piers, big and small, host vessels from fishing boats to ocean cruise ships. Souvenir shops, and many restaurants with fish specialties. A lunch or dinner here offers a good meal alongside a very beautiful view. In the area, many companies offer tours of the Bay, toward Alcatraz and Angel Island.

Alcatraz Island is one of the city’s main attractions. This rock (actually called “The Rock”) has a rather loaded history. It started as a military fortress whose cannons defended the entrance to San Francisco Bay, then it was turned into a military prison during the American Civil War. In 1933, the island became a federal penitentiary, which is what it remained known for in history. Thanks to its location, Alcatraz was perfect as a prison: if the cells and the inner walls did not keep you on the island, the cold water full of strong currents kept you away from civilization. Officially, no man ever escaped from Alcatraz, with the possible exception of Frank Morris and the Anglin brothers, who are rumored to have survived their perilous escape attempt. Some of America’s most dangerous criminals were sent to Alcatraz, among them Robert “The Birdman” Stroud, the psychopath with an IQ of 134, and Al Capone, the Prohibition-era gangster. A sentence at Alcatraz was a horrible one… if being locked up next to the most dangerous people, or the conditions there, were not enough, you had to live with the fact that you had a view of lively San Francisco, just a few miles away across the water, and yet at an impossible distance. It is a very strange place to visit as a tourist, but the site is now arranged to give you a very good picture of what the Alcatraz prison meant. The prison was closed in 1963, when it was decided that the American penitentiary system had to move from a punitive model to a correctional one. The last piece of Alcatraz history belongs to the Indian Occupation of the island in 1969-1971, when several American Indian tribes occupied the island in protest at the way they were treated by the United States government.

If you drive to SF, it also takes about an hour, ~60 miles, on US 101, the road that crosses the US from north to south. US 101 is generally a freeway, but in some places it is just a road through a town (something like the E85 in Europe). Driving north on 101 into SF, you reach the city’s symbol, the Golden Gate Bridge. If it is not covered in fog (which happens extremely often), you can see a metal monster connecting the two shores separated by San Francisco Bay. Just north of the GG Bridge there is a vista point from which you can see the city, Alcatraz, Angel Island and the vast ocean. As soon as you exit north of the Golden Gate, a region begins that seems more mountain than metropolitan. Nearby is Muir Woods National Monument, with its collection of redwoods, trees tens of meters tall and hundreds of years old.

Near the entrance to the Golden Gate Bridge is Golden Gate Park, a park covering an entire district of the city. SF is not a very noisy city, but the park is a very good place for relaxing walks, jogging and picnics. In this park is the Academy of Sciences, a natural science museum. It is a place every middle-school student should see. It contains a planetarium, in the shape of a sphere spanning three floors. Another sphere is an enclosed tropical rainforest, in which you can climb a spiral around the tall trees, feeling like you are in a real forest, complete with the heat and humidity, the birds and the insects.

One thing that defines San Francisco is its hills and its very steep streets, many of them approaching a 45-degree angle. The best known street with a dizzying slope is Lombard. This street would have been impossible to climb or descend straight, so it was made winding. It is one-way, you can only go down it, and the inside of the curves is filled with flower gardens. The street is one of the oldest and is not asphalted but paved with bricks. It is a very interesting test for passionate drivers. Also specific to SF, and owed to the hills, are the cable cars. The equivalent of trams (and pulled by horses 100 years ago), they cannot run on the rails under their own power because of the streets’ angle, so they are pulled uphill by chains running under the ground, similar to a funicular.

As a resident of the city, you live in buildings of only a few floors, because the terrain does not allow very tall constructions (except downtown, on Market Street, where most of the office buildings are). I had the chance to enter an apartment in SF and I was surprised by what it looks like inside. Everything is very cramped and the spaces are small. But what is very nice is that most buildings give tenants access to the roof. Here, even though a strong wind very often blows, people have chairs and tables outside and a barbecue at the ready. The view is very beautiful, because if you are on a hill you can see a large part of the city, the Golden Gate Bridge and San Francisco Bay.

San Francisco is a beautiful city to visit and to live in. It is big, yet fairly quiet. Almost everything you might need is nearby, from shops and big companies to parks, an ocean beach or a mountain forest. If Los Angeles is the heart of Southern California, San Francisco is the heart of Northern California.

[Personal] Driving down the US Highways (III) – Death Valley

Although it was not in the original plan, Monday's direction was not home but a more daring objective, a place called Death Valley. As the name suggests, it is a very unfriendly place climate-wise; we could easily call it something else: Hell. This valley has a very interesting feature: it lies below sea level, containing the lowest point on land in continental America. That is also why the area is so hot. To reach it, we left the state of Nevada and entered California again.

Just as much of the Grand Canyon is a national park, part of Death Valley belongs to Death Valley National Park, with a $20 entrance fee per car (a fee well worth it, considering you get a very well paved road in the middle of nowhere, plus tourist signage). After entering the park, we descended in altitude with every mile driven. The GPS and the roadside signs showed we had dropped below 0 meters of altitude. The heat was becoming infernal. If you stuck your hand out the car window, even at 100 miles per hour, it felt like putting your hand into the flames of a fire. There was nothing around, just the road ahead and gravel left and right, without a trace of vegetation.

The first stop was Zabriskie Point, a hill from which you could see the mountainous part of the area. A hundred years ago, those mountains held mines extracting minerals from the valley, the most exploited being borax. Given the conditions under which extraction and transport had to be done, though, it hardly seemed worth the effort. From that point we began descending toward a place called The Devil's Golf Course. If there is an image of what Hell looks like, this is it: a very large expanse of ground covered with very sharp rocks which, when you get close, turn out to be large chunks of salt. There are still holes in the ground where salty water reached the surface; because of the heat, the water evaporated, leaving only the salt behind.

The final destination in the park was a place called Badwater, a completely flat area a few miles across, covered with a thick layer of salt. If you looked down and ignored the heat, you could swear it was snow. You could even draw in the "snow". It is hard to imagine how life could exist in this place, yet in a corner of Badwater there is a small pond (I cannot call it a lake) where you could find some insects. From that pond, looking back, you could see a mountain with a large sign on its slope, a few dozen meters up, reading "Sea Level"; we were standing 86 meters below sea level. This is as close to Hell as you'll ever be.

On the way out of the park we stopped for a very short walk, climbing through a canyon to a place called Natural Bridge which was, intuitively, a rock bridge between two slopes. We then took a road called Artist's Road, which climbed up and down the hills of Death Valley. And if the wasteland so far had not seemed desert enough, right at the park exit we saw sand dunes. We got out to touch the sand, which was relatively cool at the surface because of the wind; but if you dug a few centimeters down, you reached hot sand, since all the heat from the atmosphere was retained in the ground.

We left the park, but not Death Valley. We began climbing considerably, reaching some fairly tall mountains from which you could see the whole valley. Leaving the Death Valley area, we started seeing landscapes closer to the California we knew. The rest of the way just meant finishing the few hundred highway miles left. The road was beautiful. Driving down the Lost Highways of the US. We headed toward the Sierra Nevada, home to the highest peaks in the contiguous United States. It was getting dark and we caught the sunset over the mountains to the west; since we had to cross them, the sunset lasted a long time, because we caught it again behind them. We passed Lake Isabella, near Sequoia National Park, a lake formed by a dam on a river flowing out of the Sierra Nevada. From Bakersfield we continued to I-5, then took US-101 toward Mountain View, where we arrived at midnight.

The car we had rented had about 1,000 miles on the odometer when we left. We returned it with almost 3,000, and with a whole desert adventure as part of its past and ours.

[Personal] Driving down the US Highways (II) – Las Vegas

On Sunday morning we woke up, packed the tent and hit the road again, with Las Vegas as the final destination but with stops along the way. Before leaving Grand Canyon National Park we stopped at the Tusayan ruins, the remains of a pueblo, a settlement of the area's natives, the Hopi. The lady at the museum next to the ruins told us about the Indians of the American Southwest: the Hopi, the Apache and the Navajo. Exiting the park, we kept seeing incredible views of the canyon, which extends far to the east of the park. We continued to another park, a reservation of the Sinagua people, Wupatki National Monument. Here we saw other pueblo ruins (the term also refers to the Pueblo culture), larger and more intact. We were already in a completely desert area, with no settlement within tens of miles. Apart from typical desert vegetation, there was no tree to offer shade. The land was very unfriendly, and it is hard to imagine how people lived there, yet communities of tens and even hundreds of people existed in the area. We saw a kind of citadel, comparable to a small medieval fortress, except that instead of gray it was bright red, blending into the desert. Driving a few more tens of miles through this park, we reached a more mountainous area and entered Sunset Crater Volcano National Monument. The mountain there was a volcano, and in places you could see traces of lava from the eruption. Coming out of the park on the other side of the mountain, we were already in a completely different world: greenery and trees. The bigger surprise came when thick, heavy rain began to fall, when half an hour earlier we had been in a desert where the only water was in our bottles.

We continued through Arizona to another national park, Walnut Canyon National Monument, near a town called Flagstaff. Also a Sinagua settlement, it was fascinating: in a mountainous area, on a river valley, Indian tribes built their houses into the mountainside. The walls of their houses have withstood almost 1,000 years in this mountain, and you could see how they lived their lives. The houses stood a few hundred meters above the river they depended on, on nearly vertical slopes. The tourist trail that takes you around the valley where the dwellings stand is nothing compared to what the inhabitants had to do to move around their town. The houses were cleverly engineered: each family had a room where it could keep a fire, protected from rain by the rock above and from cold and heat by walls of stone and clay.

From Flagstaff we took I-40 toward Las Vegas. The highway ran parallel to (and sometimes coincided with) the famous Route 66. After a few hours of driving we reached Hoover Dam, the famous dam on the Colorado River. We crossed the bridge linking Arizona and Nevada and stopped on the Nevada side of the dam in infernal heat (even though it was 6 in the evening). The concrete complex was baked by the sun; walking in felt like stepping into an oven.

Boulder Dam, as it was called when President Hoover ordered its construction in 1931, during the Great Depression, was opened in 1936 by F.D. Roosevelt, who renamed it Hoover Dam. The structure is immense, its concrete wall towering over 200 meters. The thick concrete walls hold back the power of the Colorado River, whose water is stored in the Lake Mead reservoir. The power plant's turbines are split between the two neighboring states, the middle of the dam marking the border between Arizona and Nevada. Next to the dam stands a monument dedicated to the United States and to the American engineers who built Hoover Dam, attesting to the engineering marvel of the project.

Around 8 PM we reached Las Vegas. Las Vegas was exactly what you would expect: the city of casinos, though not as classy as Monte Carlo. The heat was incredible, even in the evening and at night. After checking into the motel, we wanted to head downtown and had to cross a freeway… so we walked ON the freeway alongside the speeding cars. We walked the main street, the Las Vegas Strip, home to pretty much all the famous hotel-casinos: the Luxor (a pyramid), MGM Grand, the Mirage, Caesars Palace, the Bellagio. The New York-New York casino had a roller coaster you could ride. The casinos were… casinos. The nicest (and only) thing worth seeing was the Fountain Show at the Bellagio. Every half hour during the day, and every quarter hour in the evening, a song would play on the speakers in front of the hotel, which has a huge fountain, and jets of water were thrown into the air in time with the music. The water reached tens of meters into the air and, together with the light effects, made for quite a show.

Overall, though, Las Vegas was rather unimpressive (much more so than Los Angeles). On Monday morning we left the motel, after I lost $2 at roulette in a single round (I couldn't leave Las Vegas without playing at least one game). The city saw us off with something unexpected given the heat: rain.

[Personal] Driving down the US Highways (I) – The Grand Canyon

After spending almost 3 months in California, for the long Labor Day weekend (September 5) we decided to take a longer trip: the Grand Canyon, Arizona, and Las Vegas, Nevada. We didn't fly; we chose a more interesting option: a 15-hour drive (each way) on US roads. The plan was to leave Friday evening, drive all night to Las Vegas, then head straight to Grand Canyon National Park, sleep at a campground there, then return to Las Vegas to visit the city. In practice, it went a little differently.

We left Mountain View around 6 PM on our favorite highway, US 101, then got onto Interstate I-5 heading south. Leaving I-5, we had to take the equivalent of national roads, which led us through some small towns that, at around 2 AM, looked like the opening of a horror movie. We got onto I-15 and were on our way to Las Vegas, by then long since deep in desert country. Day was almost breaking when we entered the state of Nevada, and the state's first town was lit up with neon. We wondered about it, because it was too small to be Las Vegas, and it turned out it wasn't… we still had a few miles to go east. On the horizon we saw a strong light, but we couldn't tell whether it was Las Vegas or the rising sun. Approaching the city, we saw a neighborhood whose building lights looked like flowers on a meadow. Still on the freeway, we passed hotels and casinos that never seemed to end. You couldn't tell it was night, because the neon lit up everything.

After stopping for a hot-dog breakfast and refueling the car, the sun was already rising and we headed toward the day's final destination: the Grand Canyon. We drove toward Boulder City and Hoover Dam, crossed the bridge next to the dam and entered Arizona. We continued on highways through the middle of nowhere, exactly as you'd see in the movies. The 9 AM heat confirmed it: we were in Middle of Nowhere, Arizona. Around noon we reached Grand Canyon National Park. The planned 15-hour drive had turned into 18. A $25 fee bought entrance to the park for 7 days.

It wasn't quite what you'd expect… it isn't just a desert ending in a precipice, with no people or houses around. The canyon in that region (the South Rim of the Grand Canyon) is surrounded by a very large forested area. The center of the park is Grand Canyon Village, with many shops, including a supermarket the size of a Carrefour and a cafeteria-restaurant with WiFi (you had WiFi, but no GSM signal). Our first stop was the Yavapai Geology Museum, where a Ranger welcomed us and told us about the park's fauna. In the museum we saw a scale model of the entire canyon. More importantly, it offered our first view of the Grand Canyon. "WOW" is probably all you can say. It is breathtaking. The sheer size and depth are incredible and, as if that weren't enough, the views are unique.

There are several areas where you can hike along the rim of the Grand Canyon. There is also a trail that takes you down to the Colorado River, but it takes two days, not necessarily because of the distance, but because you cannot rehydrate fast enough in a single day to make up for how much you sweat on the way and still survive. We chose a shorter route that took us to several viewpoints over the canyon. Cars were not allowed but, again unexpectedly, the park runs shuttle buses about every 10 minutes between the various points along the route. The stops offered amazing panoramas. You simply could not perceive the distance to the canyon floor. And it was unimaginable how the Colorado River, given a few million years (now that's patience), managed to shape the relief that way. From some spots you could see the Colorado River itself, barely distinguishable because of its "dirty" color. It is probably a geologist's greatest dream, since you can see a huge number of rock types and their evolution, layer by layer. The westernmost point of the park, the last stop of the hike (we cheated and took the bus for the last few stations), is called Hermits Rest, an old settlement now repurposed as a souvenir shop. Standing at that point on the canyon's edge felt like standing on the edge of the world.

We returned to the car and drove to the easternmost point of the park, Desert View, to a campground. After reserving a spot (it cost $12) and pitching the tent, we went back to Grand Canyon Village for food, beer and firewood. We made a campfire and closed out Saturday. By then I had been awake for about 36 hours. And the trip was just beginning.

[Personal] Los Angeles

Being based in Northern California, LA is quite far away. But we took advantage of the long 4th of July weekend (thanks to Google, which gave us both Monday and Tuesday off, not just Monday, July 4th) and drove down to Los Angeles.

Renting a car for a weekend is quite cheap, so we got one and set off on Saturday morning. The drive to LA is fairly long, but beautiful. As we got farther from the north it grew warmer, and the landscape changed from the green of trees to the yellow of the desert (let's not exaggerate, it wasn't really desert, but it was pretty barren). The road took us along the ocean coast and we could see it for much of the way. We stopped at a beach, Pismo Beach, to admire the view and the Californian surfers. Approaching Los Angeles, you could see from afar the skyscrapers rising like spikes from the middle of the city (that's probably how medieval castles looked, standing out against the villages surrounding them).

We arrived in LA on Saturday evening and, since we were staying near Hollywood Boulevard, we ended up in a pretty lively area (Guns N' Roses' "Welcome to the Jungle" came to mind). The city that evening seemed straight out of a movie: walking the Hollywood strip, you saw clubs with big bouncers at the door guarding the entrance against a 50-person line, expensive and/or tuned cars on the street contrasting with the homeless on the sidewalks, and limousines stopping, opening their doors and inviting in the groups of girls stumbling drunk out of the clubs (true story 😉 ). Welcome to the city of angels!

Sunday was the day we toured LA. The first stop, in the morning, was the famous Venice Beach, a huge beach. Unfortunately, clouds kept us from swimming or sunbathing, so we went for a walk in the area. A very crowded little street ran parallel to the oceanfront, full of shops on one side and street vendors' tables on the other. The scenery was fairly typical for a seaside beach. The smell of "grass" and other aromatic perfumes followed you everywhere.

The next destination was just outside the city, a place called the Getty, sitting on a hill. We left the car at the base and rode an automated funicular up to the main building at the top. The Getty is an art museum. Though not as big as the Louvre, it is big. American art being nearly nonexistent, the museum is filled with European paintings from every era. All the works were indexed by Google Goggles, so you could look up information about every piece of art from your phone. But what was impressive about the Getty was not necessarily its contents but the Getty itself, a building with beautiful, modern architecture, its massive walls surrounded by green gardens. Thanks to its position, the building offers a panorama of LA from above. The Getty is a must-see if you make it to Los Angeles.

Toward evening we went to Downtown LA to see the center and the skyscrapers. We took the metro… the LA metro network is huge and we saw some beautifully decorated stations (for example, the Hollywood one had its entire ceiling decorated with film reels). Being Sunday, it was deserted. We stood at the base of the immense buildings of various banks and big companies. It felt like a maze, and you could barely see the sky. We walked to Chinatown, where we ate at a Chinese restaurant.

Monday was the 4th of July and we were making plans for where to watch that evening's fireworks. We spent the day around the Hollywood strip, first of all looking for a spot from which to see the Hollywood sign (a spot we never found). Hollywood Boulevard around the Kodak Theatre was very crowded. We walked along and counted the stars. The names on the boulevard belong to actors, directors and musicians, radio, TV, film and theater celebrities, though many we had never heard of (probably because we were too young).

We spent the evening on a university campus, at the University of Southern California, where we watched the exactly one-hour Independence Day show. It was an extraordinary show.

The next day we headed home, leaving through Beverly Hills, with a last stop at the UCLA campus. On the way to Mountain View we also stopped at Santa Barbara, where we visited the town and the beach.

[Personal] Mountain View, CA

From mid-June to mid-September, my address is 768 North Rengstorff, Mountain View, California, US.
Mountain View is a small town in Northern California (at 75,000 inhabitants, it barely exceeds the size of Onești), in the heart of the area known as Silicon Valley. It is a quiet town, exactly as you'd see in the movies: houses with front gardens and no apartment blocks in sight. Look in any direction and you'll generally see about 5 people in your field of view.

But this small town is known to everyone for one thing: Google [1]. For Google, Mountain View (MTV) is the center of the universe. And we can say MTV is Google's town, since the company's campus occupies a good part of its surface. There is even a Google Street, although the company's headquarters are on Amphitheatre Parkway. LinkedIn is also headquartered here, along with other, more obscure IT companies.

The town chose its name very well: "Mountain View". First of all, the weather in Northern California is not quite what you'd imagine, with sun all the time. On the contrary, it is quite cool, thanks to the cold air coming off the Pacific Ocean, and the sky is often full of clouds. So the temperature is mountain-like. And the town's apparently small size (it is, after all, part of a metropolitan area), together with the quiet, the conifers lining the streets and the squirrels (OK, and the skunks) you see almost daily, make you feel just like in the mountains. It's a wonderful place.

One of MTV's attractions is Shoreline Park, on the shore of the lake of the same name. The park is a great weekend spot for a picnic. Add the fact that there's a golf course there, and you can picture the typical American's place to relax. Here you'll also find the Rengstorff House (after which my street is named), known for its architectural style.

The town's big attraction, well known across Silicon Valley, is the Computer History Museum [2]. It is a must-see if you ever get to the area. The path through the museum takes you through the history of the computer, from the abacus and the Pascaline, to ENIAC, to the Cray, the Apple I, the Commodore 64, and up to the present. You can see the evolution of games (and even play PacMan), of the mouse and of processors, along with key milestones in the evolution of programming languages and artificial intelligence. It's the geek's museum.

Leaving aside the time spent on the Google campus or at home, I can't say I've spent much time in Mountain View, since there isn't that much to do here, although there is some buzz downtown on Friday and Saturday nights (roughly between 8 PM and 2 AM).

[1] (Achievement unlocked: I put a link to in a post)


Python is like music…

I'm very much an amateur when it comes to music (and I don't mean listening to music 😛 ). As if playing guitar without talent weren't enough, lately I've been trying to learn the piano. But while anyone can play something on a guitar, on a piano you apparently need some knowledge of music theory.

So I started on piano chords, building on what I knew from guitar. But since I have no musical ear, I was working out on paper which notes are equivalent between the two instruments.

But that didn't seem efficient… so let's bring a little geekiness into the equation. I'm an amateur at Python too, but at least I'm more talented at programming :P. So let's write some note and chord generators. A few dozen minutes later, the little Python music "library" below.

So whoever knows music theory and wants to learn Python, or knows Python and wants to see what a chord is, take a look at pychords.


def next_note(note):
	# note can be A, B, C, D, E, F, G
	if note == 'G':
		return 'A'
	return chr(ord(note) + 1)

def next_semitone(semitone):
	# semitone can be A, A#, B, C, C# etc.
	if len(semitone) > 1:
		if semitone[1] == '#':
			return next_note(semitone[0])
	if semitone[0] == 'B':
		return 'C'
	if semitone[0] == 'E':
		return 'F'
	return semitone[0] + '#'

def next_nth_semitone(semitone, n):
	for i in range(n):
		semitone = next_semitone(semitone)
	return semitone

def next_pitch(pitch):
	# pitch can be A0, A#0, C4 etc.
	if pitch[1] == '#':
		semitone = pitch[:2]
		octave = int(pitch[2:])
	else:
		semitone = pitch[0]
		octave = int(pitch[1:])
	semitone = next_semitone(semitone)
	if semitone == 'C':
		octave = octave + 1
	return semitone + str(octave)

def next_stream(init, function, size):
	# collect `size` successive values, starting from `init`
	stream = []
	for i in range(size):
		stream.append(init)
		init = function(init)
	return stream

def major_chord(note):
	print(note + " chord:")
	print(note, next_nth_semitone(note, 4), next_nth_semitone(note, 7))

def minor_chord(note):
	print(note + "m chord:")
	print(note, next_nth_semitone(note, 3), next_nth_semitone(note, 7))

# Guitar with standard tuning (fret 0 is the open string)
print(next_stream("E4", next_pitch, 13))
print(next_stream("B3", next_pitch, 13))
print(next_stream("G3", next_pitch, 13))
print(next_stream("D3", next_pitch, 13))
print(next_stream("A2", next_pitch, 13))
print(next_stream("E2", next_pitch, 13))


If I made any mistakes, comments are welcome, as long as they're malicious :P.

And here is the originally desired result: what the notes look like on a standard-tuned guitar.

['E4', 'F4', 'F#4', 'G4', 'G#4', 'A4', 'A#4', 'B4', 'C5', 'C#5', 'D5', 'D#5', 'E5']
['B3', 'C4', 'C#4', 'D4', 'D#4', 'E4', 'F4', 'F#4', 'G4', 'G#4', 'A4', 'A#4', 'B4']
['G3', 'G#3', 'A3', 'A#3', 'B3', 'C4', 'C#4', 'D4', 'D#4', 'E4', 'F4', 'F#4', 'G4']
['D3', 'D#3', 'E3', 'F3', 'F#3', 'G3', 'G#3', 'A3', 'A#3', 'B3', 'C4', 'C#4', 'D4']
['A2', 'A#2', 'B2', 'C3', 'C#3', 'D3', 'D#3', 'E3', 'F3', 'F#3', 'G3', 'G#3', 'A3']
['E2', 'F2', 'F#2', 'G2', 'G#2', 'A2', 'A#2', 'B2', 'C3', 'C#3', 'D3', 'D#3', 'E3']

It can also compute major and minor chords:

A chord:
A C# E
Bm chord:
B D F#

RIP lab: Send RIP routes to remote neighbours

[Originally posted on]

You have two routers running RIP, but the two routers aren't directly connected, because there is a third router between them. See the topology below. How do you get routes across, given that RIP only communicates with routers that are directly connected?

The simple answer is to create a GRE tunnel between R1 and R3, so that a tunnel interface simulates a direct connection between the two routers. But let's take a more didactic approach and recall some things about RIP.
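For reference, the dismissed GRE option would look roughly like this. This is only a sketch: all addresses are made up, since the original post's addresses did not survive (here is the tunnel /30 and stands in for R3's serial IP):

```
! R1 side (hypothetical addresses)
interface Tunnel0
 ip address
 tunnel source Serial0/0/1
 tunnel destination
!
! R3 mirrors this: ip address,
! with tunnel source/destination swapped
```

RIP would then simply run over the tunnel subnet as if the routers were adjacent.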

RIP v2 sends its updates to the address, which is a link-local multicast address (TTL=1). But there is another way to send routes, very important in some situations (such as some Frame Relay networks), and that is via unicast to a statically configured neighbor. This is configured with the neighbor command under the router rip configuration. The routes are encapsulated in normal unicast IP packets and, since RIP runs on top of UDP, they should be routed like any other packet.
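On R1, the relevant lines would be something like the following sketch (the address is a placeholder for R3's serial IP, since the original post's addresses were lost):

```
router rip
 version 2
 ! send updates as unicast to this statically configured neighbor (placeholder address)
 neighbor
```

R3 gets the mirror statement pointing back at R1's serial IP.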


interface Serial0/0/1
ip address
interface Loopback 0
ip address
router rip
version 2
passive-interface Loopback0

no auto-summary


interface Serial0/0/1
ip address
interface Loopback 0
ip address
router rip
version 2
passive-interface Loopback0
no auto-summary

You still need a network command covering the interfaces on which you send and receive the updates (here, the serial subnet), otherwise the received updates will be ignored.

The first thing to be careful about is that R1 and R3 need layer-3 connectivity. So you do need static routes on R1 and R3 pointing through R2.
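With hypothetical addressing (R1-R2 link on, R2-R3 link on, since the real subnets are not preserved in the post), the static routes would be:

```
! On R1: reach the R2-R3 subnet (where R3's serial IP lives) via R2
ip route
! On R3: the mirror route back toward the R1-R2 subnet
ip route
```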

Once they have connectivity to each other, the routers start sending unicast packets containing the routes. debug ip rip would show the following:

RIP: sending v2 update to via Serial0/0/1 (
RIP: build update entries via, metric 1, tag 0

Notice that the update is sent to a unicast address and not to the multicast address.

The routes are received, but they still do not show up in the routing tables. debug ip rip shows why:

RIP: ignored v2 update from bad source on Serial0/0/1

This reminds us how RIP works: when a router receives an update, it checks whether the source of the packet is on the same subnet as the IP configured on the receiving interface. If they don't match, the update is ignored. In our case, the source of the updates is not on the same network, because R2 does not modify the packet's source or destination in any way.

The solution is to disable this default check with the no validate-update-source command under the router rip configuration. This way, updates from any source will be accepted.
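On both R1 and R3, that is a single line under the RIP process:

```
router rip
 no validate-update-source
```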

And here is the wanted route in R3's routing table:

R [120/1] via, 00:00:27

Notice that the next hop is not directly connected, so the router needs to do a recursive lookup and use the static route to send the packet to R2 first.

S [1/0] via

Ixia + UPB = 5 years

Today the annual Ixia [1] event took place at UPB, an occasion that also celebrated 5 years of collaboration between the network test solutions company and the Politehnica University of Bucharest.

Ixia is a California company which, for several years now, has had a branch in Romania that has grown very strongly. The company has invested heavily in the collaboration with UPB over these years, being probably the most visible company for Computer Science students.

The first investment, 5 years ago, was the Student Research Lab, room EG106, where the Operating Systems labs (SO, 3rd year, Computer Science) are also held. This year, Ixia invested in the lab again by replacing all the computers, making EG106 the most modern room in the faculty. Ixia also sponsors a contest for students of the Operating Systems 2 course (SO2, 4th year, Computer Science), offering a prize for the best assignment. The annual contest held its award ceremony today: a Samsung Galaxy tablet went to the person with the best implementation of a transport protocol in the Linux kernel. The event also showcased the bachelor's thesis projects within the faculty that are sponsored by Ixia Romania.

This year, besides the technical contest, Ixia also sponsored a business-planning contest [2] with a $20,000 prize, money that will fund a start-up. And because Ixia wanted to support Open Source projects as well, it organized a trip for this year's students of the ROSEdu course, CDL [3].


Ixia Romania is made up of many young people, most of them (at least on the technical side) Computer Science graduates. It has a promotion and hiring policy that invests heavily in new people. Many Computer Science students have either done a summer internship there or joined the company after graduation. Last summer I had the opportunity to do an internship at Ixia in the IxOS department (kernel development). The experience was a very pleasant one, and Ixia Romania seems to be one of the companies many people would love to work for. And since I never published an article when I finished the internship, I'm glad I have the chance now.

Thank you, Ixia! And keep it up!




Cisco Expo 2011 Live

This is my third consecutive Cisco Expo, and I couldn't break the tradition by not writing an article about it.

Like last year, the event takes place over a single day, again at the JW Marriott. This year's theme is "Collaboration and Virtualization without Borders".

The program is split into three parts:

* joint sessions in the main hall, until 12:00

* parallel sessions in the smaller rooms, until 15:00

* technical sessions, until 18:00

So far the presentations have highlighted two buzzwords: "social" and "cloud". And practically every presentation eventually got to Facebook, Apple and Google (or, on the technical side, to virtualization).

The most interesting presentation so far was a teleconference with Dr. Raed Arafat, who presented a rather unexpected situation: many hospitals and ambulances in Romania are equipped with video-conferencing gear. Several hospitals in Târgu Mureș communicate with one another through Telepresence-like services so that doctors can give diagnoses remotely. Ambulances can send data about the medical state of a patient who is on the way to the hospital and can be treated by paramedics under the remote supervision of doctors.

[11:50: End of part 1]

We went out for the lunch break, where, as usual, there was a big crowd. The good part was the food, which was more plentiful and better than in previous years (for those who attend conferences just for the food).

Right after lunch I attended the presentation held by Cronus on "Emerging Threats and Next-Generation Security", which was very interesting. Andrei [1] and Bogdan [2], from Cronus, presented the evolution of the motives behind "IT criminals": from hacking for fame, to hacking for money, up to the present, when IT attacks are driven by political or social motives. They showed the level of damage attacks can do today (Hacking as a Service, to stay on theme with the cloud-computing world), with examples such as Stuxnet, and also presented the Cisco solution that is supposed to come and save the day.

The stands, which I made a point of visiting this year as well, were rather unimpressive. The classic partners were present, and there were only two new names. Unfortunately, the number of devices or technologies actually demonstrated at the stands was quite small, and the companies focused more on handing out brochures. The only thing I tested was WebEx for Android, installing it on my Hero.

Unfortunately, I couldn't stay for the last part, the technical sessions, because I had to leave. So my experience of this edition was rather short.

[Personal] In Bruxelles…(III)

When we returned from Bruges, we had already been in Bruxelles for four days, but we still hadn't actually gone out to visit the capital. Since it was already evening when we arrived at Bruxelles Central Station, pretty much everything was closed. So we went to a beer hall, La Mort Subite, where we tried some new kinds of beer. One of the classic ones is Trappist beer, brewed following a recipe of the Trappist monks from certain Belgian and Dutch monasteries (of the Trappist Order). A more special variety is "Lambic" beer, which is made only in a certain region of Belgium, because the fermentation process needs a particular bacterium found only in that region. "Gueuze" beer is another type, with a taste already far from a normal beer, obtained from Lambic beer that is fermented for another year or two.

After we finished, it was too early, so we decided to take a walk anyway… to the other end of the city. We went to the Atomium, the monument built for the 1958 World's Fair, when Bruxelles hosted the event. At that hour it had long been closed (as was, in fact, everything within a kilometer's radius), but it looked very interesting at night. We were hoping to come back the next day to visit the inside, but we never did.

The last day in Bruxelles was also the day we were real tourists. We took the map and the list of museums and started visiting. The first stop was the Comic Strip Museum, where we learned about the history of comics in Belgium and beyond. The best-known Belgian comic is "The Adventures of Tintin", which I knew about but had forgotten was Belgian. I bought a comic for someone I knew would appreciate it. At the museum shop, collectible figurines of characters from various comics were for sale. I couldn't resist buying a Smurfette figurine for someone.

The next stop was the Musical Instruments Museum. Instruments of all types, from all eras and from all regions of the world (I saw quite a few from Romania). A very interesting touch was that at the entrance you received a pair of wireless headphones, and if you approached a display case with an instrument, a melody played on that instrument started in your headphones. After going through the museum's four levels open to the public, we went up to the top floor, the 10th, where there was an open-air terrace with a very beautiful view. You could even see the Atomium, which was quite a distance away.

And since we were in the Capital of Europe, we couldn't skip the seat of the European Parliament. Although there was nothing to visit there, we passed by to see what it looks like. At the entrance, there was a plaque with "European Parliament" written in every language of the EU countries (including Romanian).

We headed to the Grand Place, the city center, where we did the shopping for the people back home. Souvenirs and chocolate, up to the limit of what we could still stuff into our luggage and be accepted at the airport. Nearby was also the famous Manneken Pis fountain (look it up yourselves).

Before packing, we returned one last time to Delirium, where we sat in the basement, where we hadn't been on the previous evenings. They had all 2000 beer varieties for sale down there. The whole ceiling was decorated with the logos of the beers sold in the pub. The music was just awesome, making the last night one to remember. We left late, knowing that the metro runs all night, but the surprise was that halfway to our lodging we were kicked off the metro and walked across a good part of the city, singing whatever songs we could remember from Delirium and elsewhere.

The trip to Bruxelles was great, especially thanks to the people I went with. I met new people and got to know older friends better. We took lots of pictures (not me, but the team of semi-professional photographers in our group) and saw many interesting things. Oh… and lots of beer 😉

The end.

[Personal] In Bruxelles…(II)

On Monday we went to Bruges (or Brugge, depending on the language), a medieval town in Belgium and a very beautiful place. We took the train there, a one-hour trip. Bruxelles Central Station is incredible, with trains arriving about once every two minutes. We got a very good deal on the ticket price because we bought a set of 40 tickets (trips) that cost 200 euros and, being 17 people, we used 34 of them (round trip), coming out at ~12 euros per person compared to the normal student round-trip price of 23 euros.

The station in Bruges looked very modern, but we stepped out of it into a town that felt like the 18th century. Bruges is a very beautiful town ("a fucking fairytale", for those familiar with the film named after it). Its medieval part (excluding the modern suburbs) is entirely surrounded by a circular canal, built in the Middle Ages for defense, with entry into the town through certain equally spaced points guarded by defense towers. A large canal crosses the town along its diameter and splits into several smaller canals. Unfortunately, we couldn't take a boat ride on the canal at this time of year.

The main attraction was the tower in the center of town, where we climbed 366 steps to the top and found the mechanism of the chiming clock, similar to a music box, but with a drum weighing not a few grams but… 9 tons. An interesting touch was that after paying for the tower ticket, we could use the same ticket to visit the city hall. Unfortunately, there were only two rooms to visit there, but it was nice because we saw the medieval map of the town and how its topology was conceived.

The walk through Bruges was worth the time and the train ticket, even though most of the tourist attractions were closed. The only places open were the restaurants (which were rather expensive), the fast-food shops with the apparently traditional dish of french fries with mayonnaise, and the chocolate shops. Every 5 meters there was a shop with chocolate goodies in the window. It was torture to walk past them without buying EVERYTHING inside.

[Personal] In Bruxelles…(I)

I'm at the end of a 5-day trip to Bruxelles, Belgium, which turned out to be a very nice experience. Its main purpose was the FOSDEM 2011 conference, held on the weekend of February 5-6 in the Belgian capital. I attended the event together with several friends (mostly from ROSEdu), and even though FOSDEM lasted only two days, we set aside another three days for ourselves to visit Bruxelles and its surroundings.

Bruxelles is a city that, although not incredibly large compared to other cities, feels enormous because it is very winding, with narrow streets and many hills. Overall, it is a typical European capital, with a very crowded and rather dynamic center, and with suburbs that seem deserted because of the quiet.

The main mode of transport for locals, but especially for tourists, is the metro. It has a fairly complex network that takes you anywhere in the city. As tourists, we chose either an individual ticket valid for three days on any line and for any number of entries, or a one-day ticket, also unlimited, but valid for a group of up to five people. Although the ticket was fairly expensive, it was worth its price, because you could see they had invested heavily in the metro system, judging by the absolutely gigantic stations. The trains came quite often and were similar to those in Bucharest (some new, some old). As soon as we got our metro tickets and found our lodging (we ended up with a very large and beautiful apartment), we started going out into the city.

The first evening was spent at a pub called "Delirium", a place known for the large number of beer varieties available. Although crowded, it was very interesting thanks to the good beer, the good music and the way it looked inside. Although we also went to another beer hall, "La Mort Subite", we returned a second time to Delirium, where we sat at a barrel-shaped table. We also went to Delirium on our last evening, before leaving, so over the 3 outings I ended up trying more than 10 beer varieties (quite few compared to the total :P).

We spent most of the weekend at the Université Libre de Bruxelles, at FOSDEM, or on the road to and from it. Quite annoying was the lack of diverse food options for tourists without much money or much time to sit down, so we spent many days living on sandwiches. Also very annoying was the fact that everything closed around 19:00, and on Sunday everything was closed all day.


These days I’m in Bruxelles, .be, at FOSDEM 2011 [1], together with friends from ROSEdu.
The Free and Open Source Software Developers' European Meeting is a two-day conference that brings together Open Source enthusiasts, stuffs them into a building and waits for them to fight with each other in geekiness.
The two-day schedule is very crowded, from 9 AM to 6 PM, with events in 10 rooms at the same time. Alongside the presentations, communities and companies have stands in the hallways. Everyone who is anyone is here: Fedora, Mandriva, CentOS, OpenSUSE, Debian and Ubuntu, Gnome and KDE, Mozilla, OpenOffice and LibreOffice, PostgreSQL, BSD, Perl and many others. You can buy T-shirts, badges and other geeky souvenirs from practically every stand (I bought a couple of gifts I can't wait to give). O'Reilly has a huge list of open source related books for sale. CAcert brought assurers for the Web of Trust (I didn't get to assure any new people, but I did do some 0-point assurances of other assurers). In the Embedded building, communities/companies like BeagleBoard have a showcase of embedded devices that run Android or other embedded distros.
The presentations ranged from boring to very interesting, but I didn't get to see more than a few. The first one I went to was about LLVM, a new compiler that is supposed to be the next gcc. I went to one about HTML5, and it was the first time I heard someone say that "HTML5 is here" rather than "HTML5 is coming" (I can't wait to hear the same thing about IPv6); I learned some interesting things about HTML5 there. One more presentation on a similar topic was about "The browser as a desktop" and how the web will evolve. Another one was about Google's Go programming language… interesting, but I still didn't get why Go is better than other languages. Among the 15-minute lightning talks, an interesting one was about CyaTLS, an implementation similar to OpenSSL, but for embedded devices. Another interesting presentation was one from OpenStack about open source Cloud solutions, though it could have used more technical details. But the most interesting presentation for me was the very last one, "How kernel development goes wrong", from a Linux kernel developer with an inside look into the Linux development community.
The event was interesting. I talked to some people there (for example, some guys from Mozilla Europe told me about a rising community in the Balkans, which would include Romania, and I told them that maybe we might collaborate). I learned some new things and found out more about things I already knew. So, overall, it was an interesting experience.


Private Networks – Introduction and Legacy Solutions

META: This article is a draft for a chapter of my Research Paper for this semester.


An Enterprise Network is usually the network of a medium-to-large company that has multiple branches in different geographical locations, each branch with its own local data networks. The branches need to communicate in order to access each other's resources (for example, the company's centralised database). Running its own direct cable connections between all the branches is practically impossible, so the company will have to depend on a Service Provider (SP) for interconnecting the sites. This can be done in several ways, using different technologies and protocols, each with its pros and cons, varying in price, ease of implementation, features, throughput and security.

A generic topology consists of the following:

P – Provider Equipment
PE – Provider Edge Equipment
CE – Customer Edge Equipment
C – Customer Equipment

The Service Provider has its network (cloud) of PE and P equipment, with the Provider Equipment in the core of the network and the Provider Edge Equipment at the border. In each branch, the client company has a CE connected to a PE, and behind the CE sits the rest of the Customer Equipment.
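As a rough sketch, traffic between two branches crosses the provider cloud like this:

 Branch 1                SP cloud                 Branch 2
C -- CE ---- PE -- P -- ... -- P -- PE ---- CE -- C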

Depending on the technologies used, each of these pieces of equipment can take different forms.

Leased lines

The most basic connection type is for the Service Provider to provide a leased line. This practically gives the company a virtual cable between two locations. The edge routers in the branches would see each other as if they were directly connected, as a point-to-point connection.

Having a leased line gives you control over both Layer 3 and Layer 2. This means that the company can choose the encapsulation of the line. It can go for a simple PPP connection, or PPP with PAP or CHAP authentication as protection against Layer 1 attacks, or configure traffic compression, or any other encapsulation wanted. The edge routers would be in a common broadcast domain, meaning that the company can also choose the Layer 3 protocol (IPv4, IPv6, etc.).
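As an illustration, a minimal leased-line setup with PPP and CHAP on IOS-based equipment might look like the sketch below (hostnames, addresses and the shared secret are made up for the example):

hostname CE1
username CE2 password 0 SharedSecret
interface Serial0/0
 ip address 10.0.0.1 255.255.255.252
 encapsulation ppp
 ppp authentication chap

hostname CE2
username CE1 password 0 SharedSecret
interface Serial0/0
 ip address 10.0.0.2 255.255.255.252
 encapsulation ppp
 ppp authentication chap

Note that for CHAP each router holds a `username` entry matching the peer's hostname, with an identical password on both sides.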

Leased lines are rather expensive and they do not scale well. The approach is acceptable if the company has two branches, but for n branches, n*(n-1)/2 lines would be needed for full connectivity (for example, 10 branches would need 45 lines). Frame Relay could be used to solve this problem.

Frame Relay

To have a scalable network, the company could use a technology like Frame Relay, connecting several sites over the Service Provider's network infrastructure. Frame Relay is a Layer 2 protocol that connects the company's edge routers to a Frame Relay Cloud (run by the Service Provider). The SP is in charge of providing point-to-point or point-to-multipoint connections between routers through the use of Virtual Circuits (VC).

Several companies can use the same physical infrastructure of the SP, but each company will have its own set of Virtual Circuits, so data will not be visible between companies, preserving the privacy of data. The Virtual Circuits are switched in the FR Cloud using an identifier called a DLCI, which is attached to each frame sent into the Cloud. The SP uses DLCIs to get data from one edge router to another. It is easier and cheaper to provision new Virtual Circuits than new physical connections between different sites. But the SP's fee is still on a per-VC basis, so rather than full mesh topologies, companies will choose hub-and-spoke topologies (the headquarters usually being the hub).

Configuration example (IOS-based equipment)

hostname CE1
interface Serial2/0
ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000

hostname PE1
interface Serial1/0
no ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000
frame-relay intf-type nni
frame-relay route 300 interface Serial2/0 100
interface Serial2/0
no ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000
frame-relay intf-type dce
frame-relay route 100 interface Serial1/0 300
frame-relay route 102 interface Serial1/0 102

hostname P
interface Serial1/0
no ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000
frame-relay intf-type nni
frame-relay route 300 interface Serial1/1 400
interface Serial1/1
no ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000
frame-relay intf-type nni
frame-relay route 400 interface Serial1/0 300

hostname PE2
interface Serial1/1
no ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000
frame-relay intf-type nni
frame-relay route 400 interface Serial2/0 200
interface Serial2/0
no ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000
frame-relay intf-type dce
frame-relay route 200 interface Serial1/1 400
frame-relay route 201 interface Serial1/1 201

hostname CE2
interface Serial2/0
ip address
encapsulation frame-relay
serial restart-delay 0
clock rate 128000
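The CE configurations above leave the IP addressing details out. To actually exchange traffic, each CE also needs an address and a mapping from the peer's address to its local DLCI (100 on CE1, 200 on CE2, matching the `frame-relay route` statements above). A minimal sketch for CE1, with made-up addresses:

hostname CE1
interface Serial2/0
 ip address 192.168.12.1 255.255.255.0
 encapsulation frame-relay
 frame-relay map ip 192.168.12.2 100 broadcast

The `broadcast` keyword lets routing protocol traffic cross the VC; alternatively, Inverse ARP can build this mapping automatically, or `frame-relay interface-dlci` can be used on a point-to-point subinterface.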


These two solutions, very commonly used until recently, are private by design, because the traffic between the company's offices can't be seen by anyone except the Service Provider. If encryption mechanisms at Layer 2 or above are used, even the SP is prevented from reading the data. The company can't be attacked with malicious data, because outside traffic won't reach the Customer Equipment.

The downfall of these solutions came with the rise of the Public WAN, the Internet. A company that wanted a WAN connection between its sites plus connection(s) to the Internet needed to purchase two separate services. Because of the low cost of the Internet, companies prefer to have Internet connections for their offices and also use them as a way of connecting different branches. This solves some problems, but introduces others.