Awin's Reading

Wednesday, April 26, 2006

virtualization.info: How to stress test virtual machines

"Performances are the greatest concerns CIO/CTO usually have approaching virtualization.
You surely would compare a virtual machine performance against a physical server, but you could also be in need of exploring how different virtualization technologies perform.

The first aspect you should test is I/O performance: physical raw partitions, proprietary file systems, remote SAN systems, and local virtual IDE or SCSI disk subsystems. All of these configurations should be tested and compared with each other and against physical machine I/O performance.
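To make this concrete, here is a minimal sequential-write timing sketch in Python. It is only an illustration of the kind of measurement tools like IOMeter automate with far more realistic mixed read/write patterns; the scratch-file path and sizes are arbitrary assumptions:

```python
import os
import time

PATH = "testfile.bin"       # hypothetical scratch file on the disk under test
BLOCK = 64 * 1024           # 64 KB per write
TOTAL = 256 * 1024 * 1024   # 256 MB total

buf = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())    # force data to disk so we time real I/O, not caching
elapsed = time.time() - start
print("sequential write: %.1f MB/s" % (TOTAL / elapsed / 1024 / 1024))
os.remove(PATH)
```

Running the same script inside the guest and on the bare host gives a first-order comparison of the two.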

A second aspect you could test is network performance, since virtual network adapters can handle traffic in different ways and be more or less efficient.
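Likewise, a crude first check of a virtual network adapter is raw TCP throughput. Here is a minimal Python sketch, assuming a sink such as `nc -l 5001 > /dev/null` is already listening on a hypothetical test server:

```python
import socket
import time

HOST, PORT = "192.168.0.10", 5001   # hypothetical test server running a TCP sink
PAYLOAD = b"x" * 65536              # 64 KB chunks
TOTAL = 100 * 1024 * 1024           # push 100 MB through the virtual adapter

sock = socket.create_connection((HOST, PORT))
start = time.time()
sent = 0
while sent < TOTAL:
    sock.sendall(PAYLOAD)
    sent += len(PAYLOAD)
sock.close()
elapsed = time.time() - start
print("throughput: %.1f Mbit/s" % (sent * 8 / elapsed / 1e6))
```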

The best way to stress test a VM is to use the standard tools for physical machine stress testing.
And just in case you are new to this practice, here is a list I compiled of great, ready-to-go, free tools:

I/O performance

Intel IOMeter

Microsoft SQLIO Disk Subsystem Benchmark Tool

Network performance

JMeter

Microsoft Exchange Server 2003 Load Simulator

Microsoft Web Application Stress Tool

Microsoft Web Capacity Analysis Tool

OpenSTA

SLAMD Distributed Load Generation Engine

Soap-Stone

I found Intel IOMeter and Microsoft Web Application Stress Tool both great for stress tests.

If you are going to test VMware ESX performance, you should also absolutely check out the VMware ESX Server Performance Troubleshooting lab manual released at VMworld 2005."

An Introduction to Virtualization

Microsoft acquired Connectix Corporation, a provider of virtualization software for Windows and Macintosh based computing, in early 2003. In late 2003, EMC announced its plans to acquire VMware for $635 million. Shortly afterwards, VERITAS announced that it was acquiring an application virtualization company called Ejascent for $59 million. Sun and Hewlett-Packard have been working hard in recent times to improve their virtualization technologies. IBM has long been a pioneer in the area of virtual machines, and virtualization is an important part of IBM's many offerings. There has been a surge in academic research in this area lately. This umbrella of technologies, in its various connotations and offshoots, is hot, yet again.

Complete article: An Introduction to Virtualization

Tuesday, April 25, 2006

Virtualization - Wikipedia, the free encyclopedia

"In computing, virtualization is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration. This new virtual view of the resources is not restricted by the implementation, geographic location or the physical configuration of underlying resources. Commonly virtualized resources include computing power and data storage. A good example of virtualization is modern symmetric multiprocessing computer architectures that contain more than one CPU. Operating systems are usually configured in such a way that the multiple CPUs can be presented as a single processing unit. Thus software applications can be written for a single logical (virtual) processing unit, which is much simpler than having to work with a large number of different processor configurations. A new trend in virtualization is the concept of a virtualization engine which gives an overall holistic view of the entire network infrastructure by using the aggregation technique."

Complete article: Virtualization - Wikipedia, the free encyclopedia

Monday, April 24, 2006

Interview: Andrey Savochkin

KernelTrap has a fascinating interview with Andrey Savochkin, the lead developer of the OpenVZ server virtualization project. In the interview Savochkin goes into great detail about how virtualization works and why OpenVZ outshines the competition, comparing it to VServer, Xen and User Mode Linux. Regarding virtualization, Savochkin describes it as the next big step, 'comparable with the step between single-user and multi-user systems.' Savochkin is now focused on getting OpenVZ merged into the mainline Linux kernel.

Read More: Interview: Andrey Savochkin

Technology News: Commentary : Why Linux May Never Be a True Desktop OS

With Linux, the customer often expects to get the product for free and wants the retail price of Windows deducted from his/her purchase price. There are no funds passed back to the vendor and, because Linux is different, customers tend to place more service calls -- at $85 a call. As a result, the vendor generally ends up losing money.

Read more: Technology News: Commentary : Why Linux May Never Be a True Desktop OS

Sunday, April 09, 2006

Polyphasic sleep - Wikipedia, the free encyclopedia

Polyphasic sleep is a sleep pattern specification intended to reduce sleep time to 2–5 hours daily. This is achieved by spreading out sleep into short naps of around 20–45 minutes throughout the day. This is supposed to allow for more waking hours with relatively high alertness.
The method uses natural human sleep mechanisms to maximize alertness when sleep time needs to be minimized. However, it requires a rigid schedule which makes it unfeasible for most people. It can work well for those engaged in activities which do not permit lengthy periods of sleep.

Read more

How to have a 36 hour day

By: Jon’s blog @ Zaadz

How many times do you hear someone say “I wish there were more hours in the day” or something along those lines? The fact is that all of us are only given 24 hours. Having said that, how we spend those 24 hours varies radically from person to person. It’s become a bit of a cliche by now but the 24 hours we have is the same 24 hours that Thomas Edison and Mother Theresa had and that Oprah Winfrey and Bill Gates currently have. As the old song goes “It’s in the way that you use it.”
But what if we had more than 24 hours in a day?
Not possible? I disagree. While we can never have more than 24 hours of chronological time, I think it's very possible to have many more hours of functional time. In fact, I think it's probably possible to get up to 36 hours of functional time in your day if you do a few relatively simple things. So without further ado, here is my prescription for the 36 hour day.
It's a list of ways to save time that you may or may not have thought of. Implement a few of them and you'll likely open up a couple of hours each day that you didn't previously have. Implement all of them and you just might find yourself with too much time on your hands. File that under "Good Problem to Have," right?
So here are 10 ways that you can radically change your life and free up the time you didn’t know that you could.
36 Hour Day Strategy #1: Optimize Your Sleep - Some of us can get by just fine on 3-5 hours of sleep a night (I'm jealous of you!) while others "need" 9+ hours to feel rested. Certainly a good portion of this is genetic and perhaps environmental. Having said that, I think there are ways that all of us can sleep less and at the same time feel more rested. Here are a few suggestions:
Wake up at the same time every morning - I first came across this through Steve Pavlina's excellent blog. I've been trying it for a little while and totally dig it. It's a simple concept. Just set your alarm clock for the same time each morning, wake up when it goes off, and then go to bed at night when you feel tired and not before. Steve claims it can free up 10-15 hours a week. I think he's totally right.
Make your room a quiet, dark cave - For too many people the bedroom is a source of activity, light and noise. Do your best to minimize the amount of sound in your bedroom (consider buying an air cleaner or white noise generator if you live in a noisy apartment building or neighborhood). Take steps to eliminate or reduce the light that comes into your bedroom while you sleep (heavy curtains or darkroom material on the windows work well here). And do your darnedest to remove stimulus from your bedroom (e.g., TV, lots of clutter, etc.)
Experiment with polyphasic sleep - Polyphasic sleep is a sleeping pattern that proposes to reduce sleep down to 2-5 hours a day. I haven't tried it yet so I can't speak to its validity, but head back to Steve's blog again for some great information on this unusual but potentially effective sleeping method.
Time Savings from Optimizing Your Sleep = Approximately 1.5 Hours
36 Hour Day Strategy #2: Optimize Your Diet
The human body spends more of its energy on digestion and elimination than on anything else. What you put into your body in the form of food and drink will definitely have an impact on your energy levels as well as the amount of sleep you'll need. A few years back I was pretty heavy into weightlifting and was eating a ton of calories and lots of protein every day. The result? I needed to sleep a *ton* to feel rested. Sometimes 10-11 hours a night (the hard workouts didn't help either).
Now my diet has done a 180 and I’m eating a much better (but far from perfect) mix of vegetables, fruits, whole grains and healthy fats and oils. The difference in energy is dramatic and I sleep a lot less than I previously needed to. My diet still needs improvement but these changes have literally added hours to my days.
I'd recommend a few resources for people looking to save time by improving their diet. The first is Tony Robbins' Living Health course. Tony has more energy than any person I've ever seen, and that's a great testament to his health and fitness regimen. He has based a lot of his information on the work of Dr. Robert Young, and thus I would recommend Dr. Young's book The pH Miracle as well.
Finally, consider going on a cleanse. I recently went on a four-day cleanse as outlined in the pH Miracle book and I've had a lot more energy in the week and a half since I went off it. The book Juice Fasting and Detoxification also helped me through a pretty intense (both physically and emotionally) four days and I'd recommend that one as well.
Time Savings from Optimizing Your Diet = Approximately 1.5 Hours
36 Hour Day Strategy #3: Multi Task
OK, this is a given, right? If you do two things at the same time you will be able to do more during your day. But isn't multitasking bad? The lady driving down the highway with her cell phone glued to her ear is probably not the best model for multitasking. The guy you had lunch with yesterday who checked his Blackberry 17 times before they brought out the main course isn't doing anyone any favors with his technology-enabled form of ADD.
But I’d argue that multi-tasking, when done right, is one of the best ways to save time throughout your day. Combining talking on the phone and “brain dead activities” is a great way to multitask. For most people, doing laundry or washing the dishes is an activity that takes no thought. Why not use that time to make a few phone calls and kill two birds with one stone? But remember, checking e-mail or watching TV are not brain dead activities. And nothing is more annoying than having a phone conversation with someone who is not fully present.
Another great way to multi task is to incorporate exercise into your activities. Need to get together with a friend to catch up? Meet them for a jog and get caught up while you knock out your daily workout. I'll often stretch (it's good for you!) while I'm reading or at my computer (I've got one of those exercise balls that allows me to stretch while I'm checking e-mail... kinda geeky but it works for me!).
Something else I do is a series of exercises created by a gentleman named Pete Egoscue. These exercises are designed to improve flexibility and range of motion and prevent injury. And many of them can be done while reading, on the phone, etc. I'd highly recommend Pete's book Pain Free for anyone interested in these.
There are a ton of ways that you can incorporate exercise into your daily routines without taking any extra time out of your day. It’s really a great way to free up your schedule and keep your body in tip-top shape.
Time Savings from Multi Tasking = Approximately 2.0 Hours
36 Hour Day Strategy #4: Get Organized
You really owe it to yourself to get organized because it will save you both time and stress. There are a number of different strategies for getting organized. One of the best that I've found (and use personally) is David Allen's Getting Things Done methodology. GTD, as it is more commonly referred to, is a system for capturing and managing the things that you need to do and remember. It's remarkably effective in that it gets all of the little things out of your head, which frees up your "psychic RAM" for more productive thoughts and results in increased creativity.
David Allen’s system isn’t the only one out there. A lot of people will use Franklin-Covey, Tony Robbins’ life management system or any of a number of other planning systems. I’m not convinced that there’s one best system out there but I think it’s important for all of us to use some sort of a system so that “Remember to buy toothpaste” isn’t consuming even an ounce of our mental energy.
There’s a ton of info about GTD online for free and the investment you’ll make in learning one of these systems will pay off in spades. Not only will you be more productive but you’ll also feel less stressed which will result in more energy and once again will add hours to your days.
Time Savings from Getting Organized = Approximately 1.0 Hours
36 Hour Day Strategy #5: Improve Your Typing Speed
In this computer age, the keyboard is often our primary form of communication with many people. This is a wild-ass guess, but I'd say that the average person probably spends about 1-2 hours a day typing. This could be e-mails, IMs, memos, reports, etc. Certainly for some people this number is much higher and for others it is lower. So let's just say an average of 1.5 hours per person for now.
Now let's assume that you currently type 40 WPM. If you improved your typing speed to 60 WPM you would save 33% of the time you currently spend typing. Improve it to 80 WPM and you've now saved 50%. That's probably half an hour or 45 minutes a day you've saved. Over the course of a year or a decade (not to mention a lifetime) this results in a *huge* savings.
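The arithmetic is easy to check: time spent typing scales as the inverse of speed. A quick sanity check of the numbers above, assuming the 1.5 hours per day from before:

```python
hours_per_day = 1.5    # assumed average time spent typing each day
base_wpm = 40.0

for wpm in (60, 80):
    saved = hours_per_day * (1 - base_wpm / wpm)
    print("at %d WPM you save %.2f hours/day" % (wpm, saved))

# at 60 WPM you save 0.50 hours/day (a third of the time)
# at 80 WPM you save 0.75 hours/day (half of it, i.e., 45 minutes)
```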
It's amazing that we invest in all of these productivity applications in business and yet many people are still hunting and pecking at their keyboards. That's just crazy to me. The faster you type, the better you can communicate, plain and simple. The keyboard becomes a natural extension of you rather than an impediment to exchanging information and sharing yourself with the world.
I’d highly recommend investing a little time (even just a few minutes a day) in improving your typing. A program that I use for this is TypingMaster and I love it. It’s easy to use and can even be configured to track your real-world typing so that it can incorporate words you commonly mis-type into its drills. This is definitely a great way to save time on a daily basis.
Time Savings from Improving Your Typing Speed = Approximately 0.75 Hours
36 Hour Day Strategy #6: Improve Your Reading Speed
Just as with typing, improving your reading speed can make you more productive and save you tons of time. It varies a lot as well, but I'll assume that each of us again spends on average between one and two hours a day reading. Whether it's the morning paper, e-mails at work, research for your job or for school, or the latest book, we all need to read continually in this day and age.
The fact of the matter is that most of us don't read all that well. We read slowly and we often have to read things multiple times to understand what's going on. In the end that either reduces the amount of material we end up reading (if you read slowly and have trouble comprehending, reading just won't be enjoyable to you) or results in a lot more time invested in reading than necessary.
As with typing there are ways to improve your reading abilities. Here are a few that I’ve incorporated:
Active Reading - One of the reasons why many of us don’t read that well is that we’re entirely passive when reading. The brain engages much more when it is active and the best way to encourage this is to make notes while reading. If you’re reading a book then mark the hell out of it. Underline passages, jot notes, etc. You’ll find that your comprehension will go way up as will your reading speed (even after accounting for the time spent marking up your book). One of the best parts about making notes is that you can return to the material later and review it more quickly and effectively.
EyeQ - Off and on over the last few years I've been using a software application called EyeQ to improve my reading speed. I think it's the fastest and easiest way for a person to increase their ability to rapidly process information. It works by getting you to move your eyes more quickly through material. This results in an increased ability to filter out words that are meaningless (a, an, the, etc.) as well as a reduced reliance on subvocalization.
Photoreading - I took a class in Photoreading a few years ago and while I'm still not convinced that it's 100% legit, any system that claims to increase reading speed to 25,000 words per minute or more is definitely worth checking out. For people who have a ton of reading to do (e.g., graduate students, attorneys, etc.) something like Photoreading could possibly revolutionize their lives and free up tons of time.
Time Savings from Improving Your Reading Speed = Approximately 0.75 Hours
36 Hour Day Strategy #7: Learn Out Loud
Probably the #1 reason why I started LearnOutLoud.com is that I believe so strongly in the power of audio learning to literally add hours to peoples’ lives and provide increased enjoyment of, and fulfillment during, times which have historically been frustrating and unproductive (e.g., the morning commute).
Audio learning is the perfect multi-tasking activity. Most people who know me know that I’m listening to audio books, podcasts, etc. several hours every day. I’ll do this whenever I’m driving, while exercising, doing stuff around the apartment, etc. I’ve been able to crank through an unbelievable number of books in the last year (including unabridged versions of My Life by Bill Clinton and The World is Flat by Thomas Friedman) that I never would have found the time to sit down and read. Likewise, I’ve been able to virtually “attend” conferences like South by Southwest and the World Economic Forum thanks to the miracle of podcasting.
Thanks to the iPod and other portable MP3 players it’s never been easier to learn out loud. One of my favorite things to do is to go for a run with a few podcasts or an audio book queued up. In fact, I recently completed the LA Marathon while simultaneously listening to the first half of John Battelle’s book The Search (read more on that here ). It was kind of fun to know that I was getting both a workout for my body and for my mind.
We’ve essentially set up LearnOutLoud as the epicenter for what I truly feel will be an audio learning revolution in upcoming years and decades. People are increasingly pressed for time and the opportunity to listen to the information you need to consume rather than having to read it opens up a lot of doors. It’s a great way to stay on top of all the information and trends that affect your world and that’s why I’m so excited about it.
Time Savings from Learning Out Loud = Approximately 1.5 Hours
36 Hour Day Strategy #8: Use Software To Your Advantage
The right software can bring huge time savings to your life. Certainly not all software will save you time. In fact, some applications can actually be huge time sucks. Anyone ever hear of Minesweeper? But there are some programs out there that will add minutes to your days and hours to your weeks and months. Here are some that I’ve stumbled upon:
ActiveWords - ActiveWords is a macro application that allows you to assign hot keys to repetitive tasks. We use this a lot in our business to save time and it could certainly save you time in your personal life as well.
Here's a simple example of how I use it. Let's say that someone is coming by the office for lunch. I want to give them fairly detailed directions via e-mail. One option would be to type up directions each time. That's really a waste as I'm writing the same thing every time. Another option would be to type up the directions, put them in a text file, and then cut and paste them into my e-mail each time I needed them. That does save time, but I still have to find the text file on my system each time and do the cut and paste. What ActiveWords allows me to do is to assign a hot key or phrase to my directions. Now all I have to do is type "officedirections" and hit F8 and the directions will automatically be inserted into my e-mail. Cool huh?
There are a ton of ways to use this nifty little application and I feel that I’m just scratching the surface of its usefulness.
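If you are curious what the hot-string idea looks like under the hood, here is a toy sketch of the concept (emphatically not how ActiveWords itself is implemented; the triggers and snippets are made up):

```python
# Toy hot-string expander: replace trigger words with stored snippets.
SNIPPETS = {
    "officedirections": "Take the 101 to Exit 12, turn left...",  # made-up text
    "sig": "Jon @ LearnOutLoud.com",
}

def expand(text):
    for trigger, snippet in SNIPPETS.items():
        text = text.replace(trigger, snippet)
    return text

print(expand("Here you go: officedirections"))
```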
Cloudmark Spamblocker (or other anti-spam software) - If you’re manually processing and deleting spam you’re just wasting your time. The investment in a good spam blocker is well worth it. I’ve been using Cloudmark’s product for several years and I really like it. Almost all my spam gets blocked and rarely does a legitimate message end up in my spam folder.
Another solution is to use GMail (or another web-based app) for your e-mail. These systems end up doing a pretty good job of filtering spam as well. And now a lot of these services have advanced functionality so you can use them and have the e-mails still appear to be coming from your domain (e.g., jon@learnoutloud.com rather than learnoutloud@gmail.com).
Bloglines (or other RSS aggregation software) - I follow 50+ blogs on a number of subjects including technology, new media, audio books, podcasting, U2 and of course Dilbert . There’s no way I’d be able to stay on top of all of this stuff without the help of a piece of software that puts all these blogs in one place and shows me what new updates have been made to each of them. I use Bloglines and I love it. Not only can I read blogs when I’m at the computer but there’s even a mobile version of Bloglines so I can read blogs from my Blackberry.
Blogs are increasingly becoming the best way to consume information online and so if you haven’t set up an aggregator yet I’d definitely recommend it. There are dozens of aggregators out there and while Bloglines does the trick for me you may want to look at the other apps to find one that works well for you.
Time Savings from Using Software To Your Advantage = Approximately 0.5 Hours
36 Hour Day Strategy #9: Cut Your TV Time in Half
Depending on what study you look at you’ll find that the average person watches something like four hours of TV a day. That boggles my mind. We’re incredibly busy and yet we somehow find a way to spend four or more hours a day watching television?!!! Crazy…
Now I'm not one to say that all television is bad or that mindless entertainment is never a good thing. There are definitely some good TV shows, and there's of course a time and a place for turning the brain off for a bit. I have no beef with that, but what disturbs me is when people give huge chunks of their lives to an activity that doesn't really provide any meaningful benefit in most cases.
A year and a half ago I turned off my cable service and I haven’t missed it at all. I’ve got a Netflix subscription so I can have a few movies handy for times when I want to watch them. And if there’s a big game on (like last night’s incredible UCLA win…Go Bruins!) then I can typically find a place to watch it with some friends. What I have noticed is that the activity of sitting down “just to see what’s on” has become entirely foreign to me. And I think that’s a very good thing.
So I’m not saying you have to go to the extreme and shut your TV off. Just be conscious of what you’re watching and why. And see if you can’t reduce the amount of time you spend watching TV by 50%. If you currently watch four hours a day you almost assuredly can get by watching two hours a day. I mean there are some good shows on but not that many good shows…
Time Savings from Cutting Your TV Time in Half = Approximately 2.0 Hours
36 Hour Day Strategy #10: Get Help from Others
The final way to have a 36 Hour Day is to look for opportunities to have other people help you out with stuff. A lot of this depends on factors like what your job is and how much money you have. If you're the CEO of a Fortune 500 company you can probably find people to do a lot of stuff for you and will have no problem paying them to do so. But what about the rest of us?
First of all, don’t discount people’s interest in helping you out for free. Let’s say you are moving in a few weeks. Why not ask several friends to help you out? It certainly makes the load a lot easier and saves you time.
Another possibility is trading things you are good at for things you need help with. For instance, let’s say you need help with housecleaning. Perhaps you can find someone whose English skills aren’t that good and offer to tutor them in English in exchange for help with cleaning. You’ll save time and they’ll benefit from your help resulting in a win-win for both of you.
There are tons of opportunities like this if you just keep your eyes open for them. Of course asking someone to help you out means being willing to help if you’re asked to. But with all this time you’re saving this shouldn’t be a problem right?
P.S. There's another great way to save time when you're researching something or looking for information. There are a number of services online that will help you for free or for a nominal charge. For instance, when I have a tech problem I'll often post it to Experts Exchange and I'll usually get back an answer within hours or even minutes. For non-techie questions I'll use a service like Google Answers. There's a small fee associated with getting questions answered, but you can set the amount and it's almost always worth it in terms of the amount of time you save by getting someone to help you out with the research.
In addition to services like this there are thousands of message boards on the Internet staffed with volunteers who can help you answer many questions. Back in the day I started one of these message boards at CertTutor.net and it has helped thousands of people get their technology certification questions answered. It’s just one of many like it out there in just about every subject you can imagine.
Time Savings from Getting Help from Others = Approximately 0.5 Hours
So as we add these up, we find the potential here to save 12 hours each day. Wow. Certainly your mileage will vary with the strategies, but hopefully you can implement some of them in your daily life. Time is the most precious commodity on the planet, and by saving time in some areas you'll have more time for doing the things that are truly most important to you and for pursuing your goals and following your bliss. And if we all do that... well, I think that will change the world.

Saturday, April 08, 2006

Cisco phases out 1700, 2600 and 3700 series routers

Virus threatens PCs running Linux or Windows

Friday, April 07, 2006

Has LinuxWorld peaked?

Every time there's a LinuxWorld, there is a certain need to define what it's all about. One year it was about integration. One year about the desktop. One year about virtualization. This year, it's about virtualization.
Yes, again.
Like a rerun of your favorite TV show, virtualization has made the scene at LinuxWorld Boston 2006 this week, as vendor after vendor made big announcements on that which is virtual.
XenSource was the clear winner in this arena, with Novell, Red Hat, and Virtual Iron all announcing integration of the Xen paravirtualization platform in their upcoming releases. SUSE Linux Enterprise Server 10 will have it first, followed later this year by Red Hat and Virtual Iron. Needless to say, the folks at XenSource are feeling pretty good.
So, too, is the crew at SWsoft, which won the show's popular vote for Virtuozzo in the Product Excellence awards. VMware also announced that it is sharing its core virtual machine format and specification license-free. Heck, even Microsoft got into the act, announcing it will give away Virtual Server 2005. Whenever it ships, of course.
It wasn't all about virtualization, though. Cool things are happening in the messaging arena, with Scalix and OpenXchange server generating some show buzz.
The show itself was really slow this year, with much lower energy and attendance. Open Source guru Bruce Perens groused in his annual State of the Open Source press conference that this meant the decision by show organizers to shift the show from New York to Boston was clearly a bad idea. We might go one step further: we are wondering whether two LinuxWorlds in the United States are a good idea, period. And it's not just us.
One thing we noticed on the show floor this year was the distinct lack of certain vendors. Namely, IBM and HP, who usually have some sort of show floor presence at LinuxWorld.
Not this time. Oh, they're here. Each company has had some sort of customer fete, and IBM has its usual reserved meeting rooms to meet with press, customers, and the like. But no mega-giant booths dominating the ground and airspace here at the BCEC.
But here's what is really odd about this. We knew beforehand HP wasn't coming to the floor. It had cited expense as the major reason. According to the person we spoke with, a company like HP can plunk up to $2 million down on a show trip — that's booth materials, floor space, marketing, travel, and lodging. We can relate to the need to save that kind of money.
IBM's lack of floor presence may be related to this — after all, $2 million isn't exactly pocket change. But we were surprised to hear a source from IBM confirm a rumor we'd heard. Apparently, it was his understanding that IBM and HP had entered a gentlemen's agreement regarding floor attendance, or lack thereof.
Was this a savings pull-out? Or are HP and IBM wondering what we are all starting to wonder: What's the real draw of LinuxWorld? It's clearly no longer a developer/community show. One executive told us he thinks it should be a show for the customers. He might have a point. The question is, can IDG create the kind of energy and program needed to draw those Linux customers in?

Blogger Help : How can I create expandable post summaries?

This is the manual for creating expandable post summaries in my Blogger blog. Remember that you must start the remaining part of your post with this tag:

<span class="fullpost">

and close it with this tag:

</span>

That's all!

Thursday, April 06, 2006

What is MBone?

From Wikipedia, the free encyclopedia

Mbone (short for "multicast backbone") is an experimental backbone for multicast traffic across the Internet. Since most routers on the Internet do not yet support multicast, the Mbone has evolved to connect multicast-capable networks over the existing Internet infrastructure. It is anticipated that the Mbone will eventually become obsolete as more routers understand and forward multicast traffic, which is a standard feature in IPv6. A major difficulty with the commercialization of multicast routers is that it is more difficult for an ISP to compute charges for multicast traffic.
The Mbone is currently of practical use for shared communication such as videoconferences and shared collaborative workspaces. It is not generally connected through commercial Internet Service Providers, but is often found at universities and research institutions. Some other projects and network testbeds, such as Internet2's Abilene Network, have since made the Mbone largely obsolete.
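For context, joining a traditional any-source multicast group of the kind the Mbone carries requires nothing but the group address. A minimal Python receiver sketch (the group and port are arbitrary examples):

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # hypothetical group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# struct ip_mreq: the group address, then the local interface (any)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, src = sock.recvfrom(1500)
print("got %d bytes from %s" % (len(data), src[0]))
```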

BSD: The Other Free UNIX Family

Date: Jan 20, 2006. By David Chisnall.

There are a lot of options in the Free UNIX market at the moment. Everyone's favorite buzzword is Linux, and Sun is in the process of releasing Solaris under a Free Software license. One family, however, receives less attention than it is due. Berkeley Software Distribution (BSD) has grown into almost a complete replacement for UNIX, with numerous enhancements. David Chisnall explains why the BSD family has found its way into a large number of systems and what these systems can do for you.

On March 9, 1978, the University of California at Berkeley released a set of patches to the Sixth Edition of the UNIX Timesharing System. These patches were licensed very permissively: you could do pretty much anything with them, but any product that used them had to state that it did so. This advertising clause was later dropped, and any distribution in source or binary form was allowed, providing the copyright notice was retained.

The name of this patch set was the Berkeley Software Distribution—BSD for short—and it gradually grew into almost a complete replacement for UNIX, with numerous enhancements.
A few years later, the owners of the original UNIX copyright decided to try to cash in on their system's success, and sued UCB. The upshot was that a small number of files containing original UNIX code were rewritten, producing a completely unencumbered version of BSD, with "UNIX" dropped from the name for trademark reasons.

By the early '90s, Intel was shipping a microprocessor capable of running a real operating system: the Intel 386 included features such as support for paged virtual memory, so it became a potential target for running BSD. In 1991, Bill Jolitz released 386BSD and then neglected the project. A group of people, frustrated by the difficulty of getting patches accepted into 386BSD, began distributing a patch set and then a complete system known as FreeBSD.

At about the same time, BSD Networking Release/2 (one of the last releases by UCB) was adopted by a group known as NetBSD. While the FreeBSD team focused on supporting the Intel 386, the NetBSD team was keen to retain the portable nature of the original BSD code.
In 1995, a clash of personalities led to one of the NetBSD core developers, Theo de Raadt, forking the project and creating OpenBSD. OpenBSD, being based in Canada, was not subject to the stringent export laws that the USA placed on cryptography at the time, and so became a popular operating system among the security-conscious. This led to a thorough code review, which found a large number of bugs and security holes in the code imported from NetBSD. This code review is an ongoing part of the OpenBSD development process and allows the project to boast an excellent security record.

Over the years, BSD code has found its way into a large number of systems. Many commercial UNIX variants began as forks of BSD code, and a BSD TCP/IP stack was used in earlier versions of Windows. BSD was also very popular in academia. One project, the Mach Microkernel at CMU, used a modified version of BSD to run UNIX programs. The Mach project was used by a company called NeXT as the foundation for their operating system. When NeXT was bought by Apple, a lot of the old BSD code was replaced with code from the NetBSD and FreeBSD projects. Mac OS X can be thought of as a close cousin to the BSD family: Although it uses Mach as an abstraction layer, much of the kernel is BSD-derived.

It is worth noting that the BSDs are complete systems. Linux is just a kernel, and to be useful it is usually combined with the GNU userland. The BSDs include their own userland—although some parts, such as the compiler, are imported from the GNU project. A BSD system can be installed with no third-party applications—and work. It is more common, however, to add additional packages such as X.org and a desktop environment (the same applications traditionally run atop Linux).

The FreeBSD project underwent some radical changes between versions 4 and 5. Much of the kernel was redesigned to better support multiprocessor systems. One developer, Matt Dillon, was unhappy with the direction it was taking, so he set up Dragonfly BSD, a fork of FreeBSD 4. While FreeBSD 5 and later use a system of shared resources and locks, Dragonfly BSD uses message passing between concurrent threads, an approach common on microkernel operating systems, including Amiga OS (where Matt Dillon first made a name for himself).
Dragonfly BSD is designed as a cluster operating system, and should be able to be installed on a cluster of machines, presenting the appearance of a single large multiprocessor machine to end users.

FreeBSD

FreeBSD is the most popular of the BSD family. It is traditionally known for stability and performance. Many web servers are still around running versions of FreeBSD from years ago without a reboot. FreeBSD is developed in three branches: -CURRENT, -STABLE, and -RELEASE. Anything new is added to -CURRENT, which may or may not work at any given time. Once a new feature has undergone testing by the development team, it is added to -STABLE. Periodically, a release is created. These releases have a version number and their own branch in the CVS tree. Only bug fixes are allowed to be introduced into -RELEASE branches—no new features. This makes tracking a -RELEASE branch the thing to do if you want a completely stable system.

FreeBSD development underwent something of a hiccup around version 5. The release schedule was feature-based, and a large number of new features were planned. Gradually, the release date for FreeBSD 5 slipped farther and farther back. During this time, the project moved to the same six-month release schedule as NetBSD and OpenBSD.
The current release version is 6.0, which is the system used on the laptop on which this article is being written. The 5.x series was highly ambitious and the lack of immediate success gave it a reputation for being unstable and slow—the lack of speed coming from the large quantities of debugging code found in releases.

The 6.x series is intended to avoid the stigma associated with the 5.x series. One of the most noticeable improvements is the new scheduler, known as ULE. ULE is not enabled by default because it does not achieve quite as good throughput as the traditional 4BSD scheduler, making it worse for server roles. For desktop (or laptop) use, it is much better. ULE prioritizes processes that spend most of their time waiting: interactive processes. On this somewhat aging laptop, it is possible to do a large compile in the background without any loss of responsiveness in X applications.

Installation of third-party software is done using the ports system. Each port is a Makefile, containing the files that must be downloaded to build the program and a set of patches to make it run on FreeBSD. The ports system will automatically resolve dependencies when installing programs.

Every port can be compiled into a binary package, and there are copies available from the FreeBSD FTP mirrors (although they often lag behind the port version by several days).
For the few closed-source programs that require Linux, FreeBSD includes a Linux ABI compatibility layer, which translates system call vectors into their equivalent on FreeBSD. It also includes a Linux-style /proc file system for programs that depend on it. Shared libraries used by Linux programs can also be installed—the ports tree contains copies of the basic packages found in several popular Linux distributions.

FreeBSD has a couple of features that make it attractive for home users. First, nVidia releases graphics drivers for it, giving it the same level of 3D acceleration available to Linux on nVidia hardware. Second, it includes Project Evil, a reimplementation of the NDIS driver API used by wireless networking cards on Windows, allowing many WiFi cards to be used without native FreeBSD drivers.

Project Evil is also being ported to NetBSD, which also has Linux ABI support. Note that ABI support is not full emulation. All UNIX systems have a set of system calls—functions handled by the kernel—which are all assigned numbers. These numbers and the system call arguments vary depending on the kernel. The ABI compatibility layer simply remaps the arguments and changes the number, giving almost no performance penalty—in some cases, even faster performance than native due to a better kernel implementation. Linux is not the only non-native ABI supported by these systems—NetBSD even includes a rudimentary Darwin ABI that allows some OS X applications to run.

NetBSD

The NetBSD tag line is "of course it runs NetBSD," and a long-running joke was that NetBSD was the OS you ran on your toaster. This ceased to be a joke a few months ago when the NetBSD team proudly demonstrated a toaster running their OS.

BSD has traditionally been easy to port to new architectures. The system contains a few platform-specific files, and the rest of the code uses an abstraction layer to implement features. The memory subsystem, for example, has a handful of (simple) platform-dependent functions, with the vast majority being implemented in a cross-platform way.

NetBSD builds on this. NetBSD 2.1 was released with support for 48 architectures. In the context of NetBSD, support for an architecture means that a boot-loader exists and can boot the kernel, and that all of the userland components in the base system work.

NetBSD makes extensive use of cross-compilation. The entire system can be built on any platform that runs NetBSD (and some others), targeting any hardware that runs NetBSD. For example, it is possible to use a fast Windows machine, via the Cygwin POSIX emulation layer, to build NetBSD for an old m68k Mac that would take a week or two to build the system itself. This makes it easy to test NetBSD on less-powerful hardware, because every change can be compiled on a faster machine.

Recently, NetBSD has begun to focus more on performance. The new threading system is based on an N:M model, where the kernel schedules N threads, and a userspace scheduler multiplexes this to M user-visible threads. This is the same model used by Solaris, which is well-known for its multithreading performance, and similar to that used by FreeBSD 5 and later.
Many people who don't use NetBSD are familiar with it because of pkgsrc, the system used for distributing third-party applications. pkgsrc runs on a number of other operating systems, including Solaris and Mac OS X, and the NetBSD team recently received a donation from Sun to help improve the Solaris support.

The focus on portability gives NetBSD a very clean code base, making it a good system to study in an academic setting. One side effect is that it is often chosen as a base when implementing experimental new features. If these features are successful, they usually end up being ported to the other BSDs.

OpenBSD

OpenBSD claims to be "secure by default," and anyone who runs it gets a warm fuzzy feeling reading security advisories in popular programs with "does not affect OpenBSD" written at the bottom of the list of affected platforms.
The entire base system—kernel and userland—in OpenBSD undergoes a constant process of code review. When a new bug is discovered, the next step is to categorize it and search the rest of the tree for occurrences of the same type of bug.

This stringent checking is nice, but it applies only to the base system. Any third-party applications installed are not checked. To help reduce the problems, OpenBSD includes a number of security features.

No memory allocated by an OpenBSD program can be both writable and executable at the same time. Most operating systems support this only on the latest x86 chips that provide the NX bit, but OpenBSD implements it using the segment-based protection mechanisms that have been available in all x86 chips for over a decade, as well as on other architectures.

A random gap is inserted between stack frames, making stack-smashing exploits much harder. This is combined with a "canary" value that is placed at the bottom of each stack frame and checked before returning to ensure that it has not been tampered with.

Allocated memory is inserted at a random location in a process’s address space, making it much harder for an attacker to guess where something interesting lies. There are many other mechanisms provided, and enabled by default, for ensuring an OpenBSD box remains secure.
Besides these features, OpenBSD pioneered a number of security features now found on other systems, such as SSH and sudo. The OpenBSD version of Apache, for example, runs in a chroot jail by default, so attackers who compromise Apache cannot do anything other than break the web server—they can’t even modify the contents of the web site on disk.

The downside is that badly written applications are more likely to crash on OpenBSD—something the authors consider vastly better than being exploited. Recent security enhancements to the memory allocation system resulted in X crashing regularly on OpenBSD. Further investigation revealed a bug that had existed for more than a decade in the X code, which had undoubtedly caused countless unexplained crashes.

OpenBSD, like NetBSD, is a very clean system and is designed to be easy to administer from the command line. While all of the BSDs have good documentation (the FreeBSD handbook in particular), the OpenBSD man pages are first rate. Nothing is allowed into the OpenBSD system that is user-visible unless it is accompanied by documentation, and the man pages are considered authoritative. The first place to look for documentation on OpenBSD is the manual; if it’s not there, there’s probably a bug.

The superb documentation available in the BSD community tends to make people who ask easily answered questions very unpopular. It is common for a question posted about a BSD system to be answered with a one-line reply instructing the asker to read the manual. This is good advice; the answer is usually there.

The OpenBSD packages system is currently the worst of the bunch. For a long time, upgrading the base system (a new release of which comes around every six months) meant deleting all installed packages and starting again. OpenBSD 3.7 added support for upgrading, but only of individual packages. OpenBSD 3.8 provided a full update facility, although it is considered experimental. OpenBSD 3.9 is expected to bring the package management system up to a more competitive level.

Others

There are a few other systems built on top of the main BSDs. PC-BSD, a modified version of FreeBSD, is aimed more at the desktop; it features a graphical installer and a set of default packages targeted at desktop users. m0n0wall is another FreeBSD-derived system, designed for use in firewalls.

There are also a few commercial BSD-derived products available. Contrary to the belief prevalent in the Linux community, many of them do contribute code back to the parent projects, even though the license does not require them to. BSDI, the company behind BSD/OS, was a long-time financial supporter of the FreeBSD project.

All the BSDs have the cohesive feel that comes only from the kernel, userland, and documentation being written by the same people, but the choice of which to use is a matter of personal taste. On a low-volume server, the security and ease of use of OpenBSD can be attractive. On a laptop, the hardware support of FreeBSD can be more attractive. On more exotic hardware, NetBSD may be the only choice, and elsewhere its ease of package management and lightweight design might make it the better one. Code is frequently shared between the systems, so a feature you like in one is likely to eventually make its way into the other two.

The Future of Multicast: Source Specific Multicast (SSM)

Viewpoint by Dr. Kevin C. Almeroth

In writing each installment of a Viewpoint, I typically go back and review past Viewpoints. My goal in these writings has always been to offer some sound technical advice plus a prediction for the future. I am reasonably confident I can offer good technical advice, but I am a little nervous on the prediction side of things. My goal here is not to make some silly statement like poor ol' Bill Gates did when predicting that there would only ever be a demand for a handful of computers (albeit room-sized at the time) in the early 1980s. With all this in mind, I am going to make a bold statement: the recent development of Source Specific Multicast (SSM) is going to fundamentally change the nature, perception, demand, and impact of multicast.

Before getting into the technical discussion of exactly what SSM is, let me give some background. Obviously there is a growing demand for one-to-many data delivery. But something has been keeping IP Multicast back. That something is a gap between what the deployment folks are used to and need, and what the standards/technology groups like the IETF are producing. The key issues are protocol complexity, traffic management, address allocation, security, pricing models, etc. In defense of the IETF, they are doing their job -- they are working to define the protocol standards. The REAL gap exists between these standards and efforts to develop a working infrastructure. Some ISPs have put themselves on the cutting edge and are working hard to deploy multicast. But, there is not yet a critical mass. Critical mass and solutions to EACH of the key issues are needed before multicast becomes a mainstream solution.
Solutions to some of the key issues COULD be straightforward. For example, with respect to billing, make multicast free. Revenue will be generated by the ability to support the next-generation of applications. While some ISPs are moving in this direction, others are stalling deployment until they can figure out how to make money directly (not indirectly like the above example). With respect to address allocation, there is the GLOP RFC, but this is more of a theoretical solution. Dividing a single /8 (2^24 addresses) among 2^16 AS numbers so that each AS gets a /24 (2^8 addresses) works well in theory, but not in practice. GLOP could work well if we had IPv6 but not in the current IPv4 Internet. The real excitement recently has been generated by expedited efforts to develop a new model for multicast called Source Specific Multicast (SSM).
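The GLOP arithmetic is simple enough to sketch: the 16-bit AS number becomes the middle two octets of a 233/8 address, yielding one /24 per AS. For example, AS 5662:

```python
def glop_block(asn):
    """Map a 16-bit AS number to its GLOP /24 inside 233/8."""
    assert 0 <= asn < 2 ** 16, "GLOP only covers 16-bit AS numbers"
    return "233.%d.%d.0/24" % (asn >> 8, asn & 0xFF)

print(glop_block(5662))   # -> 233.22.30.0/24
```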
In describing SSM, the first goal is to avoid confusion, so let's start with terminology. Several acronyms have been proposed and some are still floating around: terms like PIM-SS, where "SS" stands for either Source Specific or Single Source, or even just "SS" on its own. The main confusion arises from whether SS stands for "Source Specific" or "Single Source". The consensus now is that it stands for Source Specific, but that does not mean Single Source is out of the picture. In fact, SSM in theory does not imply only a single source; an SSM group could have multiple sources, and a group with only one source is also possible. Indeed, yet another /8 (the 232/8 range) has been allocated for these source-specific applications. One final point: a single-source application does not imply SSM. A single-source application could easily be (and currently is being) supported by the existing infrastructure. Got all that?
The second place to start in describing SSM is a bit of history and a number of acknowledgements for those who first got the multicast community thinking in this direction. Personally, I believe that SSM evolved with major influences from two other proposals: Simple Multicast (SM) and Express Multicast. Both SM and Express were offered at a time when the triumvirate of multicast routing protocols (PIM-SM/MBGP/MSDP) was seen as too complex. However, both were rejected on the premise that they did not solve ALL problems and, as such, would require a wholesale replacement of the existing multicast infrastructure. While the community occasionally managed to debate the pure technical merits of these protocols, too much time was spent debating whether junking the existing infrastructure, which technically does what it is supposed to do, would do more harm than good. Out of all of this, SSM appeared. It had the benefits of some of the newer proposals, similarities to existing protocols (for interoperability), and a great deal of simplicity. However, there is a cost for what seems like a win-win-win situation: a fundamental change to the multicast service model. No longer can a receiver join a multicast group by passing only the multicast group address to the operating system; now the receiver must explicitly know the set of sources. While this may or may not be a big deal, it has certainly created a great deal of debate.
So why is SSM that much better? Fundamentally, it moves the problem of identifying sources to receivers up to the application layer. Instead of using a flooding technique like the dense-mode protocols, or a core/rendezvous technique like the sparse-mode protocols, SSM requires receivers to know who the sources are. A receiver then passes the source (and group) address to the network, and the network sends a join message towards the source. Reverse shortest-path trees are built efficiently and without the need for core/rendezvous points. Furthermore, there is no requirement for the Multicast Source Discovery Protocol (MSDP) to run between domains: sources do not need to be "discovered" because they are already known. And there is still more good news: relatively simple modifications to edge routers, no changes to core routers running PIM-SM, and co-existence with the existing infrastructure. The challenges created by SSM are not technical ones, but deployment ones.
SSM essentially changes the IP multicast service model, and the problem is that it changes how applications interact with the operating system and thus the network. First, an application now has to learn who the sources are. This can easily be accomplished via a web page or some other service, but it still requires changes to the application. The application might also have to keep track of dynamic sources, that is, sources that come and go over the duration of a session. Applications then need to pass this information to the operating system (kernel), so there needs to be a change in the API, which obviously requires changes to the operating system. Further operating system changes are necessary because the operating system passes this information on to the network. IGMPv3 standardizes the necessary functionality, but IGMPv3 has yet to be fully standardized (though it should be done soon). The bottom line is that SSM offers a great deal of simplicity, but progress will be slowed by the need to change existing pieces.
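To illustrate the service-model change at the API level, here is a receiver-side sketch of an SSM join in Python. It assumes the Linux layout of struct ip_mreq_source (group, interface, source) and falls back to the Linux constant value if the Python build does not expose it; the addresses and port are arbitrary examples:

```python
import socket
import struct

SOURCE = "192.0.2.1"   # the known sender; SSM requires naming it explicitly
GROUP = "232.1.1.1"    # a group inside the SSM range, 232/8
PORT = 5004

# 39 is the Linux value of IP_ADD_SOURCE_MEMBERSHIP, used if Python lacks it
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

# struct ip_mreq_source, Linux order: group, local interface, source
mreq = struct.pack("4s4s4s",
                   socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"),
                   socket.inet_aton(SOURCE))
sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

data, src = sock.recvfrom(1500)   # only traffic from SOURCE is delivered
```

Contrast this with the any-source join shown earlier in the MBone post, where only the group address was passed to the kernel.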
And so, with the technical discussion aside, back to the prediction. Because SSM offers a fundamental change that has so many advantages, and because the changes are significant yet achievable, I believe SSM will have a dramatic impact on the perception that multicast is a usable service. ISPs and the Internet community will soon no longer be able to ignore the performance scalability of network-based, one-to-many packet delivery. Knocking down the technical barrier will force us to solve some of the other problems. Until now all of these problems have been lumped into a mass that looks formidable. Hopefully now we can attack them one at a time and dispatch them more easily.
Finally, just like Hollywood movies that always leave the door open for a sequel, I have subtly inserted my teaser: it was the use of the term "network-based". What about all this talk about application-layer multicast? Stay tuned...

Kevin C. Almeroth earned his Ph.D. in Computer Science from the Georgia Institute of Technology in 1997. He is currently an assistant professor at the University of California in Santa Barbara where his main research interests include computer networks and protocols, multicast communication, large-scale multimedia systems, and performance evaluation. At UCSB, Dr. Almeroth is a founding member of the Media Arts and Technology Program (MATP), Associate Director of the Center for Information Technology and Society (CITS), and on the Executive Committee for the University of California Digital Media Innovation (DiMI) program. In the research community, Dr. Almeroth is on the Editorial Board of IEEE Network, is co-chairing the NGC 2000 workshop, has served as tutorial chair for several conferences, and has been on the program committee of numerous conferences. Dr. Almeroth is serving as the chair of the Internet2 Working Group on Multicast, is a member of the IETF Multicast Directorate (MADDOGS), and is a senior technologist for the IP Multicast Initiative (IPMI). He has been a member of both the ACM and IEEE since 1993. You can reach him at almeroth@cs.ucsb.edu.