The Future of Multicast: Source Specific Multicast (SSM)
Viewpoint by Dr. Kevin C. Almeroth
In writing each installment of a Viewpoint, I typically go back and review past Viewpoints. My goal in these writings has always been to offer some sound technical advice plus a prediction for the future. I am reasonably confident I can offer good technical advice, but I am a little nervous on the prediction side of things. My goal here is not to make some silly statement like poor ol' Bill Gates did when predicting that there would only ever be a demand for a handful of computers (albeit room-sized at the time) in the early 1980s. With all this in mind, I am going to make a bold statement: the recent development of Source Specific Multicast (SSM) is going to fundamentally change the nature, perception, demand, and impact of multicast.
Before getting into the technical discussion of exactly what SSM is, let me give some background. Obviously there is a growing demand for one-to-many data delivery. But something has been keeping IP Multicast back. That something is a gap between what the deployment folks are used to and need, and what the standards/technology groups like the IETF are producing. The key issues are protocol complexity, traffic management, address allocation, security, pricing models, etc. In defense of the IETF, they are doing their job -- they are working to define the protocol standards. The REAL gap exists between these standards and efforts to develop a working infrastructure. Some ISPs have put themselves on the cutting edge and are working hard to deploy multicast. But, there is not yet a critical mass. Critical mass and solutions to EACH of the key issues are needed before multicast becomes a mainstream solution.
Solutions to some of the key issues COULD be straightforward. For example, with respect to billing, make multicast free. Revenue will be generated by the ability to support the next generation of applications. While some ISPs are moving in this direction, others are stalling deployment until they can figure out how to make money directly (not indirectly, as in the above example). With respect to address allocation, there is the GLOP RFC, but this is more of a theoretical solution. Dividing a single /8 (2^24 addresses) among 2^16 AS numbers so that each AS gets a /24 (2^8 addresses) works well in theory, but not in practice. GLOP could work well if we had IPv6, but not in the current IPv4 Internet. The real excitement recently has been generated by expedited efforts to develop a new model for multicast called Source Specific Multicast (SSM).
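To make the arithmetic concrete, here is a small illustrative sketch (mine, not part of the original column) of the GLOP mapping defined in the GLOP RFC (RFC 2770): the 16-bit AS number becomes the middle two octets of the 233/8 block, so each AS is handed exactly one /24, i.e., 256 group addresses.

    /* Illustrative sketch only: the GLOP mapping (RFC 2770) from a 16-bit
     * AS number to that AS's dedicated /24 inside 233/8. */
    #include <stdio.h>

    static void glop_range(unsigned int as_number)
    {
        unsigned int high = (as_number >> 8) & 0xff;  /* upper octet of the AS number */
        unsigned int low  = as_number & 0xff;         /* lower octet of the AS number */
        printf("AS %u -> 233.%u.%u.0/24 (256 group addresses)\n",
               as_number, high, low);
    }

    int main(void)
    {
        glop_range(5662);   /* e.g., AS 5662 -> 233.22.30.0/24 */
        return 0;
    }

One common reading of the ''works in theory, not in practice'' complaint is exactly this fixed budget: a /24 per AS sounds generous until a large provider needs more than 256 simultaneous groups, and content providers without their own AS number get nothing at all.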
In describing SSM, the first goal is to avoid confusion. So, let's start with terminology. Several acronyms have been proposed and some are still floating around. Terms like PIM-SS, where ''SS'' stands for either Source Specific or Single Source, have been proposed; just ''SS'' has also been proposed. The main confusion arises from whether SS stands for ''Source Specific'' or ''Single Source''. The consensus now is that it is Source Specific. But that does not mean the term Single Source has disappeared. In fact, SSM in theory does not imply only a single source; rather, an SSM group could have multiple sources, and an SSM group with only one source is also possible. Furthermore, yet another /8 (the 232/8 range) has been allocated for single source applications. One final point: a single source application does not imply SSM. A single source application could easily be (and currently is being) supported by the existing infrastructure. Got all that?

The second step in describing SSM is a bit of history and a few acknowledgements to those who first got the multicast community thinking in this direction. Personally, I believe that SSM evolved with major influences from two other directions: Simple Multicast (SM) and Express Multicast. Both SM and Express were offered at a time when the triumvirate of multicast routing protocols (PIM-SM/MBGP/MSDP) was seen as too complex. However, both SM and Express were rejected on the premise that they did not solve ALL problems, and as such, would require a wholesale replacement of the existing multicast infrastructure. While the community occasionally was able to debate the pure technical merits of these protocols, too much time was spent debating whether junking the existing infrastructure, which technically does what it is supposed to do, would do more harm than good. Out of all of this, SSM appeared. It had the benefits of some of the newer proposals, similarities to existing protocols (for interoperability), and a great deal of simplicity. However, there is a cost for what seems like a win-win-win situation. The cost is a fundamental change to the multicast service model. No longer can a receiver join a multicast group by passing only the multicast group address to the operating system. Now, the receiver must explicitly know the set of sources. While this may or may not be a big deal, it has certainly created a great deal of debate.
So why is SSM that much better? Fundamentally, it moves the problem of ``identifying sources to receivers'' to the application layer. Instead of using a flooding technique like the dense mode protocols or a core/rendezvous technique like the sparse mode protocols, SSM requires receivers to know who the sources are. A receiver then passes the source (and group) address to the network, and the network sends a join message towards the source. Reverse shortest path trees are built efficiently and without the need for core/rendezvous points. Furthermore, there is no requirement for the Multicast Source Discovery Protocol (MSDP) to run between domains--sources do not need to be ``discovered''; they are already known. And there is still more good news: relatively simple modifications to edge routers, no changes to core routers running PIM-SM, and co-existence with the existing infrastructure. The challenges created by SSM are not technical ones, but deployment ones.
SSM essentially changes the IP multicast service model. The problem is that it changes how applications interact with the operating system and thus the network. First, an application now has to learn who the sources are. This can easily be accomplished via a WWW page or some other service, but it still requires changes to the application. The application might also have to keep track of dynamic sources--sources that come and go over the duration of a session. Applications then need to pass this information to the operating system (kernel), so there needs to be a change in the API. Obviously this requires changes to the operating system. Additional operating system changes are also necessary because the operating system passes this information on to the network. IGMPv3 standardizes the necessary host-to-router functionality, but IGMPv3 itself has yet to be fully standardized (though it should be done soon). The bottom line is that SSM has a great deal of simplicity, but progress will be slowed by the need to change existing pieces.
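As a concrete illustration of the receiver-side change, here is a minimal sketch of a source-specific join using the source-filtering socket options that accompany IGMPv3 on systems that provide them; the group, source address, and port below are placeholders of my own, not values from this column.

    /* Minimal receiver-side sketch of an SSM (S,G) join.  Placeholder
     * addresses; assumes a platform with the IGMPv3 source-filtering
     * socket option IP_ADD_SOURCE_MEMBERSHIP. */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        /* Bind to the UDP port the session uses (placeholder port). */
        struct sockaddr_in local;
        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(5004);
        if (bind(sock, (struct sockaddr *)&local, sizeof(local)) < 0) {
            perror("bind");
            return 1;
        }

        /* The (S,G) channel: the application must already know the source. */
        struct ip_mreq_source mreq;
        memset(&mreq, 0, sizeof(mreq));
        inet_pton(AF_INET, "232.1.2.3", &mreq.imr_multiaddr);   /* G: group in the 232/8 SSM range */
        inet_pton(AF_INET, "192.0.2.10", &mreq.imr_sourceaddr); /* S: the source the application learned */
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);          /* let the kernel pick the interface */

        if (setsockopt(sock, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0) {
            perror("setsockopt(IP_ADD_SOURCE_MEMBERSHIP)");
            return 1;
        }

        printf("Joined (S,G) = (192.0.2.10, 232.1.2.3); waiting for data...\n");
        /* A recvfrom() loop would follow here. */
        return 0;
    }

The key contrast with the old service model is the imr_sourceaddr field: the join names a specific source, so the network can send the join directly towards that source rather than relying on a rendezvous point or MSDP.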
And so, with the technical discussion aside, back to the prediction. Because SSM offers a fundamental change that has so many advantages, and because the changes are significant and yet achievable, I believe SSM will have a dramatic impact on the perception that multicast is a usable service. ISPs and the Internet community will soon no longer be able to ignore the performance and scalability benefits of network-based, one-to-many packet delivery. Knocking down the technical barrier will force us to solve some of the other problems. Until now all of these problems have been lumped into a mass that looks formidable. Hopefully now we can attack them one at a time and dispatch them more easily.
Finally, just like Hollywood movies that always leave the door open for a sequel, I have subtly inserted my teaser. It was the use of the term ``network-based''. What about all this talk about application-layer multicast? Stay tuned...
Kevin C. Almeroth earned his Ph.D. in Computer Science from the Georgia Institute of Technology in 1997. He is currently an assistant professor at the University of California, Santa Barbara, where his main research interests include computer networks and protocols, multicast communication, large-scale multimedia systems, and performance evaluation. At UCSB, Dr. Almeroth is a founding member of the Media Arts and Technology Program (MATP), Associate Director of the Center for Information Technology and Society (CITS), and on the Executive Committee for the University of California Digital Media Innovation (DiMI) program. In the research community, Dr. Almeroth is on the Editorial Board of IEEE Network, is co-chairing the NGC 2000 workshop, has served as tutorial chair for several conferences, and has been on the program committee of numerous conferences. Dr. Almeroth is serving as the chair of the Internet2 Working Group on Multicast, is a member of the IETF Multicast Directorate (MADDOGS), and is a senior technologist for the IP Multicast Initiative (IPMI). He has been a member of both the ACM and IEEE since 1993. You can reach him at almeroth@cs.ucsb.edu.