IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 16, NO. 3, THIRD QUARTER 2014 1617 A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks Bruno Astuto A. Nunes, Marc Mendonca, Xuan-Nam Nguyen, Katia Obraczka, and Thierry Turletti Abstract—The idea of programmable networks has recently re-gained considerable momentum due to the emergence of the Software-Defined Networking (SDN) paradigm. SDN, often referred to as a “radical new idea in networking”, promises to dramatically simplify network management and enable innovation through network programmability. This paper surveys the state-of-the-art in programmable networks with an emphasis on SDN. We provide a historic perspective of programmable networks from early ideas to recent developments. Then we present the SDN architecture and the OpenFlow standard in particular, discuss current alternatives for implementation and testing of SDN-based protocols and services, examine current and future SDN applications, and explore promising research directions based on the SDN paradigm. Index Terms—Software-Defined Networking, programmable networks, survey, data plane, control plane, virtualization. I. INTRODUCTION COMPUTER networks are typically built from a large number of network devices such as routers, switches and numerous types of middleboxes (i.e., devices that manipulate traffic for purposes other than packet forwarding, such as a firewall) with many complex protocols implemented on them. Network operators are responsible for configuring policies to respond to a wide range of network events and applications. They have to manually transform these high-level policies into low-level configuration commands while adapting to changing network conditions. Often, they also need to accomplish these very complex tasks with access to very limited tools. As a result, network management and performance tuning is quite challenging and thus error-prone. 
The fact that network devices are usually vertically-integrated black boxes exacerbates the challenge network operators and administrators face. Another almost insurmountable challenge network practitioners and researchers face has been referred to as “Internet ossification”. Because of its huge deployment base and the fact it is considered part of our society’s critical infrastructure (just like transportation and power grids), the Internet has become extremely difficult to evolve both in terms of its physical infrastructure as well as its protocols and performance. However, as current and emerging Internet applications and services become increasingly more complex and demanding, it is imperative that the Internet be able to evolve to address these new challenges. The idea of “programmable networks” has been proposed as a way to facilitate network evolution. In particular, Software-Defined Networking (SDN) is a new networking paradigm in which the forwarding hardware is decoupled from control decisions. It promises to dramatically simplify network management and enable innovation and evolution. The main idea is to allow software developers to rely on network resources in the same easy manner as they do on storage and computing resources. In SDN, the network intelligence is logically centralized in software-based controllers (the control plane), and network devices become simple packet forwarding devices (the data plane) that can be programmed via an open interface (e.g., ForCES [1], OpenFlow [2], etc.). SDN is currently attracting significant attention from both academia and industry. 
Manuscript received June 14, 2013; revised October 28, 2013. B. A. A. Nunes, X. Nguyen and T. Turletti are with INRIA, France (e-mail: {bruno.astuto-arouche-nunes, xuan-nam.nguyen, thierry.turletti}@inria.fr). M. Mendonca and K. Obraczka are with UC Santa Cruz (e-mail: {msm, katia}@soe.ucsc.edu). Digital Object Identifier 10.1109/SURV.2014.012214.00180 
A group of network operators, service providers, and vendors have recently created the Open Networking Foundation [3], an industry-driven organization, to promote SDN and standardize the OpenFlow protocol [2]. On the academic side, the OpenFlow Network Research Center [4] has been created with a focus on SDN research. There have also been standardization efforts on SDN at the IETF and IRTF and other standards-producing organizations. The field of software-defined networking is quite recent, yet growing at a very fast pace. Still, there are important research challenges to be addressed. In this paper, we survey the state-of-the-art in programmable networks by providing a historic perspective of the field and also describing in detail the SDN paradigm and architecture. The paper is organized as follows: Section II describes early efforts focusing on programmable networks. Section III provides an overview of SDN and its architecture, and also describes the OpenFlow protocol. Section IV describes existing platforms for developing and testing SDN solutions, including emulation and simulation tools, SDN controller implementations, as well as verification and debugging tools. In Section V, we discuss several SDN applications in areas such as data centers and wireless networking. Finally, Section VI discusses research challenges and future directions. II. EARLY PROGRAMMABLE NETWORKS SDN has great potential to change the way networks operate, and OpenFlow in particular has been touted as a “radical new idea in networking” [5]. The proposed benefits range from centralized control, simplified algorithms, commoditizing network hardware, eliminating middleboxes, to enabling the design and deployment of third-party ‘apps’. While OpenFlow has received considerable attention from industry, it is worth noting that the idea of programmable networks and decoupled control logic has been around for many years. 
In this section, we provide an overview of early programmable networking efforts, precursors to the current
SDN paradigm that laid the foundation for many of the ideas we are seeing today. a) Open Signaling: The Open Signaling (OPENSIG) working group began in 1995 with a series of workshops dedicated to “making ATM, Internet and mobile networks more open, extensible, and programmable” [6]. They believed that a separation between the communication hardware and control software was necessary but challenging to realize; this is mainly due to vertically integrated switches and routers, whose closed nature made the rapid deployment of new network services and environments impossible. The core of their proposal was to provide access to the network hardware via open, programmable network interfaces; this would allow the deployment of new services through a distributed programming environment. Motivated by these ideas, an IETF working group was created, which led to the specification of the General Switch Management Protocol (GSMP) [7], a general-purpose protocol to control a label switch. GSMP allows a controller to establish and release connections across the switch, add and delete leaves on a multicast connection, manage switch ports, request configuration information, request and delete reservation of switch resources, and request statistics. The working group is officially concluded and the latest standards proposal, GSMPv3, was published in June 2002. b) Active Networking: Also in the mid 1990s, the Active Networking [8], [9] initiative proposed the idea of a network infrastructure that would be programmable for customized services. There were two main approaches being considered, namely: (1) user-programmable switches, with in-band data transfer and out-of-band management channels; and (2) capsules, which were program fragments that could be carried in user messages; program fragments would then be interpreted and executed by routers. 
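The capsule approach can be caricatured in a few lines of code. The sketch below is purely illustrative and corresponds to no particular Active Networking system: the router model, the capsule layout, and the `limited_flood` handler are all invented here to show the core idea of packets that carry programs executed at each hop.

```python
# Toy illustration of the "capsule" approach from Active Networking:
# a packet carries a small program fragment that each router
# interprets and executes before deciding whether to forward.

class Router:
    def __init__(self, name, neighbors=None):
        self.name = name
        self.neighbors = neighbors or {}   # destination -> next-hop Router

    def receive(self, capsule):
        # The router runs the code carried inside the capsule, handing it
        # a view of its own state; the program returns a forwarding verdict.
        action = capsule["program"](self, capsule)
        if action == "forward":
            next_hop = self.neighbors.get(capsule["dest"])
            if next_hop is not None:
                next_hop.receive(capsule)

# A hypothetical capsule program: decrement a hop budget at every
# router, recording the path, and drop when the budget is exhausted.
def limited_flood(router, capsule):
    capsule["ttl"] -= 1
    capsule["path"].append(router.name)
    return "forward" if capsule["ttl"] > 0 else "drop"

c = Router("C")
b = Router("B", {"C": c})
a = Router("A", {"C": b})
capsule = {"dest": "C", "ttl": 2, "path": [], "program": limited_flood}
a.receive(capsule)
print(capsule["path"])   # routers that executed the capsule's program
```

The security and performance concerns cited in the text are visible even in this toy: every router blindly executes code supplied by the packet, and does so in the forwarding path.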
Despite the considerable activity it motivated, Active Networking never gathered critical mass nor transferred into widespread use and industry deployment, mainly due to practical security and performance concerns [10]. c) DCAN: Another initiative that took place in the mid 1990s is the Devolved Control of ATM Networks (DCAN) [11]. The aim of this project was to design and develop the necessary infrastructure for scalable control and management of ATM networks. The premise is that control and management functions of the many devices (ATM switches in the case of DCAN) should be decoupled from the devices themselves and delegated to external entities dedicated to that purpose, which is basically the concept behind SDNs. DCAN assumes a minimalist protocol between the manager and the network, along the lines of what happens today in proposals such as OpenFlow. More on the DCAN project can be found at [12]. Still along the lines of SDNs and the proposed decoupling of control and data plane over ATM networks, the work proposed in [13], amongst others, allows multiple heterogeneous control architectures to run simultaneously over a single physical ATM network by partitioning the resources of each switch between those controllers. d) 4D Project: Starting in 2004, the 4D project [14], [15], [16] advocated a clean-slate design that emphasized separation between the routing decision logic and the protocols governing the interaction between network elements. It proposed giving the “decision” plane a global view of the network, serviced by a “dissemination” and “discovery” plane, for control of a “data” plane for forwarding traffic. These ideas provided direct inspiration for later works such as NOX [17], which proposed an “operating system for networks” in the context of an OpenFlow-enabled network. e) NETCONF: In 2006, the IETF Network Configuration Working Group proposed NETCONF [18] as a management protocol for modifying the configuration of network devices. 
The protocol allowed network devices to expose an API through which extensible configuration data could be sent and retrieved. Another management protocol, widely deployed in the past and still used today, is SNMP [19]. SNMP was proposed in the late 1980s and proved to be a very popular network management protocol; it uses the Structure of Management Information (SMI) to fetch data contained in the Management Information Base (MIB). It could also be used to change variables in the MIB in order to modify configuration settings. It later became apparent that, in spite of its original intent, SNMP was not being used to configure network equipment, but rather as a performance and fault monitoring tool. Moreover, multiple shortcomings were detected in the conception of SNMP, the most notable of which was its lack of strong security. This was addressed in a later version of the protocol. NETCONF, at the time it was proposed by the IETF, was seen by many as a new approach to network management that would fix the aforementioned shortcomings of SNMP. Although the NETCONF protocol accomplishes the goal of simplifying device (re)configuration and acts as a building block for management, there is no separation between the data and control planes. The same can be said of SNMP. A network with NETCONF should not be regarded as fully programmable, as any new functionality would have to be implemented at both the network device and the manager; furthermore, it is designed primarily to aid automated configuration and not to enable direct control of state or quick deployment of innovative services and applications. Nevertheless, both NETCONF and SNMP are useful management tools that may be used in parallel on hybrid switches supporting other solutions that enable programmable networking. The NETCONF working group is currently active and the latest proposed standard was published in June 2011. 
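The SNMP management model described above, fetching and setting variables in a MIB, can be sketched with a toy in-memory agent. This is not the real SNMP wire protocol or SMI data model; the agent class is invented for illustration, though the two example OIDs (sysName and ifNumber from the standard MIB-II subtree) are real.

```python
# Toy model of SNMP-style management: an agent exposes a MIB
# (a mapping of OID -> value) that a manager can read (monitoring)
# or write (configuration). Simplified for illustration; this is
# not the actual SNMP protocol.

class Agent:
    def __init__(self):
        self.mib = {
            "1.3.6.1.2.1.1.5.0": "switch-42",   # sysName.0
            "1.3.6.1.2.1.2.1.0": 24,            # ifNumber.0
        }

    def get(self, oid):
        # Reads are how SNMP ended up being used in practice:
        # performance and fault monitoring.
        return self.mib[oid]

    def set(self, oid, value):
        # Writes are how SNMP can also *configure* a device,
        # the role the text notes it was rarely used for.
        self.mib[oid] = value
        return value

agent = Agent()
print(agent.get("1.3.6.1.2.1.1.5.0"))      # read a variable
agent.set("1.3.6.1.2.1.1.5.0", "core-1")   # change configuration
print(agent.get("1.3.6.1.2.1.1.5.0"))
```

Note what the sketch makes concrete: both the read and the write go through a fixed, device-resident schema, so adding genuinely new functionality means changing both the agent and the manager, which is the non-programmability argument the text makes about SNMP and NETCONF alike.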
f) Ethane: The immediate predecessor to OpenFlow was the SANE/Ethane project [20], which, in 2006, defined a new architecture for enterprise networks. Ethane’s focus was on using a centralized controller to manage policy and security in a network. A notable example is providing identity-based access control. Similar to SDN, Ethane employed two components: a controller to decide if a packet should be forwarded, and an Ethane switch consisting of a flow table and a secure channel to the controller. Ethane laid the foundation for what would become Software-Defined Networking. To put Ethane in the context of today’s SDN paradigm, Ethane’s identity-based access control
would likely be implemented as an application on top of an SDN controller such as NOX [17], Maestro [21], Beacon [22], SNAC [23], Helios [24], etc. III. SOFTWARE-DEFINED NETWORKING ARCHITECTURE Data communication networks typically consist of end-user devices, or hosts, interconnected by the network infrastructure. This infrastructure is shared by hosts and employs switching elements such as routers and switches as well as communication links to carry data between hosts. Routers and switches are usually “closed” systems, often with limited- and vendor-specific control interfaces. Therefore, once deployed and in production, it is quite difficult for current network infrastructure to evolve; in other words, deploying new versions of existing protocols (e.g., IPv6), not to mention deploying completely new protocols and services, is an almost insurmountable obstacle in current networks. The Internet, being a network of networks, is no exception. As mentioned previously, the so-called Internet “ossification” [2] is largely attributed to the tight coupling between the data and control planes, which means that decisions about data flowing through the network are made on-board each network element. In this type of environment, the deployment of new network applications or functionality is decidedly non-trivial, as they would need to be implemented directly into the infrastructure. Even straightforward tasks such as configuration or policy enforcement may require a good amount of effort due to the lack of a common control interface to the various network devices. Alternatively, workarounds such as using “middleboxes” (e.g., firewalls, Intrusion Detection Systems, Network Address Translators, etc.) overlaid atop the underlying network infrastructure have been proposed and deployed as a way to circumvent the network ossification effect. 
Content Delivery Networks (CDNs) [25] are a good example. Software-Defined Networking was developed to facilitate innovation and enable simple programmatic control of the network data-path. As visualized in Figure 1, the separation of the forwarding hardware from the control logic allows easier deployment of new protocols and applications, straightforward network visualization and management, and consolidation of various middleboxes into software control. Instead of enforcing policies and running protocols on a convolution of scattered devices, the network is reduced to "simple" forwarding hardware and the decision-making network controller(s).

A. Current SDN Architectures

In this section, we review two well-known SDN architectures, namely ForCES [1] and OpenFlow [2]. Both OpenFlow and ForCES follow the basic SDN principle of separation between the control and data planes, and both standardize information exchange between the two planes. However, they are technically very different in terms of design, architecture, forwarding model, and protocol interface.

1) ForCES: The approach proposed by the IETF ForCES (Forwarding and Control Element Separation) Working Group redefines the network device's internal architecture, separating the control element from the forwarding element. However, the network device is still represented as a single entity. The driving use case provided by the working group considers the desire to combine new forwarding hardware with third-party control within a single network device. Thus, the control and data planes are kept within close proximity (e.g., the same box or room). In contrast, the control plane is ripped entirely from the network device in "OpenFlow-like" SDN systems. ForCES defines two logical entities, the Forwarding Element (FE) and the Control Element (CE), both of which implement the ForCES protocol to communicate. The FE is responsible for using the underlying hardware to provide per-packet handling.
The CE executes control and signaling functions and employs the ForCES protocol to instruct FEs on how to handle packets. The protocol works based on a master-slave model, where FEs are slaves and CEs are masters. An important building block of the ForCES architecture is the LFB (Logical Function Block). The LFB is a well-defined functional block residing on the FEs that is controlled by CEs via the ForCES protocol. The LFB enables the CEs to control the FEs' configuration and how FEs process packets. ForCES has been undergoing standardization since 2003, and the working group has published a variety of documents including: an applicability statement, an architectural framework defining the entities and their interactions, a modeling language defining the logical functions within a forwarding element, and the protocol for communication between the control and forwarding elements within a network element. The working group is currently active.

2) OpenFlow: Driven by the SDN principle of decoupling the control and data forwarding planes, OpenFlow [2], like ForCES, standardizes information exchange between the two planes. In the OpenFlow architecture, illustrated in Figure 2, the forwarding device, or OpenFlow switch, contains one or more flow tables and an abstraction layer that securely communicates with a controller via the OpenFlow protocol. Flow tables consist of flow entries, each of which determines how packets belonging to a flow will be processed and forwarded. Flow entries typically consist of: (1) match fields, or matching rules, used to match incoming packets; match fields may contain information found in the packet header, ingress port, and metadata; (2) counters, used to collect statistics for the particular flow, such as number of received packets, number of bytes, and duration of the flow; and (3) a set of instructions, or actions, to be applied upon a match, which dictate how to handle matching packets.
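The flow-entry structure just described (match fields, counters, and a set of actions) can be sketched in a few lines of Python. This is a toy illustration with hypothetical names, not the actual OpenFlow data model; it also anticipates the table-miss behavior discussed next by giving every table a mandatory match-all fallback entry.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """One flow-table entry: match fields, actions, and per-flow counters."""
    match: dict      # e.g. {"ip_dst": "10.0.0.2"}; absent fields are wildcards
    actions: list    # instructions applied on a match, e.g. ["output:2"]
    priority: int = 0
    packets: int = 0  # counters updated on every match
    bytes: int = 0

    def matches(self, pkt: dict) -> bool:
        # A packet matches if every specified field agrees with its header.
        return all(pkt.get(k) == v for k, v in self.match.items())

class FlowTable:
    def __init__(self, table_miss_actions):
        self.entries = []
        # Every flow table must contain a table-miss entry (empty match = match-all).
        self.table_miss = FlowEntry(match={}, actions=table_miss_actions, priority=-1)

    def lookup(self, pkt: dict) -> list:
        # Consult entries in priority order; fall back to the table-miss entry.
        for entry in sorted(self.entries, key=lambda e: -e.priority):
            if entry.matches(pkt):
                entry.packets += 1
                entry.bytes += pkt.get("len", 0)
                return entry.actions
        return self.table_miss.actions
```

For example, a table created with `FlowTable(["controller"])` forwards unmatched packets to the controller, while an installed entry such as `FlowEntry({"ip_dst": "10.0.0.2"}, ["output:2"], priority=10)` handles its flow locally and updates its counters.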
Upon packet arrival at an OpenFlow switch, packet header fields are extracted and matched against the match fields of the flow table entries. If a matching entry is found, the switch applies the appropriate set of instructions, or actions, associated with the matched flow entry. If the flow table look-up procedure does not result in a match, the action taken by the switch will depend on the instructions defined by the table-miss flow entry. Every flow table must contain a
table-miss entry in order to handle table misses. This particular entry specifies a set of actions to be performed when no match is found for an incoming packet, such as dropping the packet, continuing the matching process on the next flow table, or forwarding the packet to the controller over the OpenFlow channel. It is worth noting that, from version 1.1, OpenFlow supports multiple tables and pipeline processing. Another possibility, in the case of hybrid switches, i.e., switches that have both OpenFlow and non-OpenFlow ports, is to forward non-matching packets using regular IP forwarding schemes.

The communication between controller and switch happens via the OpenFlow protocol, which defines a set of messages that can be exchanged between these entities over a secure channel. Using the OpenFlow protocol, a remote controller can, for example, add, update, or delete flow entries in the switch's flow tables. That can happen reactively (in response to a packet arrival) or proactively.

Fig. 1. The SDN architecture decouples control logic from the forwarding hardware, and enables the consolidation of middleboxes, simpler policy management, and new functionalities. The solid lines define the data-plane links and the dashed lines the control-plane links.

Fig. 2. Communication between the controller and the forwarding devices happens via the OpenFlow protocol. The flow tables are composed of matching rules, actions to be taken when the flow matches the rules, and counters for collecting flow statistics.

3) Discussion: In [26], the similarities and differences between ForCES and OpenFlow are discussed.
Among the differences, they highlight the fact that the forwarding model used by ForCES relies on the Logical Function Blocks (LFBs), while OpenFlow uses flow tables. They point out that in OpenFlow actions associated with a flow can be combined to provide greater control and flexibility for the purposes
of network management, administration, and development. In ForCES, the combination of different LFBs can also be used to achieve the same goal.

We should also reiterate that ForCES does not follow the same SDN model underpinning OpenFlow, but can be used to achieve the same goals and implement similar functionality [26].

The strong support from industry, research, and academia that the Open Networking Foundation (ONF) and its SDN proposal, OpenFlow, have been able to gather is quite impressive. The resulting critical mass from these different sectors has produced a significant number of deliverables in the form of research papers, reference software implementations, and even hardware. So much so that some argue that OpenFlow's SDN architecture is the current de-facto SDN standard. In line with this trend, the remainder of this section focuses on OpenFlow's SDN model. More specifically, we will describe the different components of the SDN architecture, namely: the switch, the controller, and the interfaces present on the controller for communication with forwarding devices (southbound communication) and network applications (northbound communication). Section IV also has an OpenFlow focus, as it describes existing platforms for SDN development and testing, including emulation and simulation tools, SDN controller implementations, as well as verification and debugging tools. Our discussion of future SDN applications and research directions is more general and is SDN-architecture agnostic.

B. Forwarding Devices

The underlying network infrastructure may comprise a number of different types of physical network equipment, or forwarding devices, such as routers, switches, virtual switches, and wireless access points, to name a few.
In a software-defined network, such devices are often represented as basic forwarding hardware accessible via an open interface at an abstraction layer, as the control logic and algorithms are off-loaded to a controller. Such forwarding devices are commonly referred to, in SDN terminology, simply as "switches", as illustrated in Figure 3. In an OpenFlow network, switches come in two varieties: pure and hybrid. Pure OpenFlow switches have no legacy features or on-board control, and rely completely on a controller for forwarding decisions. Hybrid switches support OpenFlow in addition to traditional operation and protocols. Most commercial switches available today are hybrids.

1) Processing Forwarding Rules: Flow-based SDN architectures such as OpenFlow may utilize additional forwarding table entries, buffer space, and statistical counters that are difficult to implement in traditional ASIC switches. Some recent proposals [27], [28] have advocated adding a general-purpose CPU, either on-switch or nearby, that may be used to supplement or take over certain functions and reduce the complexity of the ASIC design. This would have the added benefit of allowing greater flexibility for on-switch processing, as some aspects would be software-defined.

In [29], network processor-based acceleration cards were used to perform OpenFlow switching. The authors proposed and described the design options, and their reported results showed a 20% reduction in packet delay. In [30], an architectural design to improve the look-up performance of OpenFlow switching in Linux was proposed; reported preliminary results showed a packet-switching throughput increase of up to 25% compared to the throughput of regular software-based OpenFlow switching. Another study on the data-plane performance of Linux-based OpenFlow switching was presented in [31], which compared OpenFlow switching, layer-2 Ethernet switching, and layer-3 IP routing performance.
Fairness, forwarding throughput, and packet latency under diverse load conditions were analyzed. In [32], a basic model for the forwarding speed and blocking probability of an OpenFlow switch was derived; the parameters for the model were drawn from measurements of switching times of current OpenFlow hardware combined with an OpenFlow controller.

2) Installing Forwarding Rules: Another issue regarding the scalability of an OpenFlow network is memory limitations in forwarding devices. OpenFlow rules are more complex than forwarding rules in traditional IP routers: they support more flexible matching and matching fields, and also different actions to be taken upon packet arrival. A commodity switch normally supports between a few thousand and tens of thousands of forwarding rules [33]. Also, Ternary Content-Addressable Memory (TCAM), which can be expensive and power-hungry, has been used to store forwarding rules. Therefore, the rule space is a bottleneck to the scalability of OpenFlow, and the optimal use of the rule space to serve a growing number of flow entries while respecting network policies and constraints is a challenging and important topic.

Some proposals address memory limitations in OpenFlow switches. DevoFlow [34] is an extension to OpenFlow for high-performance networks. It handles mice flows (i.e., short flows) at the OpenFlow switch and only invokes the controller in order to handle elephant flows (i.e., larger flows). The performance evaluation conducted in [34] showed that DevoFlow uses 10 to 53 times less flow table space. In DIFANE [35], "ingress" switches redirect packets to "authority" switches that store all the forwarding rules, while ingress switches cache flow table rules for future use. The controller is responsible for partitioning rules over authority switches.

Palette [36] and One Big Switch [37] address the rule placement problem.
Their goal is to minimize the number of rules that need to be installed in forwarding devices; they use end-to-end policies and routing policies as input to a rule placement optimizer. End-to-end policies consist of a set of prioritized rules dictating, for example, access control and load balancing, while viewing the whole network as a single virtual switch. Routing policies, on the other hand, dictate through which paths traffic should flow in the network. The main idea in Palette is to partition end-to-end policies into sub-tables and then distribute them over the switches. Its algorithm consists of two steps: determine the number k of tables needed, and then partition the rule set over the k tables. One Big Switch, on the other hand, solves the rule placement problem separately for each path, choosing the paths based on network metrics (e.g., latency, congestion, and bandwidth), and then combines the results to reach a global solution.
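To make the two-step idea concrete, here is a deliberately naive Python sketch of a Palette-style decomposition. All names are hypothetical; the real algorithm must additionally preserve rule-priority semantics and fit sub-tables to per-switch capacities, which this toy version ignores.

```python
def partition_rules(rules, k):
    """Toy two-step decomposition: step 1 fixes the number of sub-tables k;
    step 2 partitions the rule set so each rule lands in exactly one
    sub-table (naive balanced split; priority interactions ignored)."""
    tables = [[] for _ in range(k)]
    for i, rule in enumerate(rules):
        tables[i % k].append(rule)
    return tables

def assign_to_path(tables, path):
    """Map each sub-table to one switch on a path, so a packet traversing
    the path is checked against the whole end-to-end policy. Requires the
    path to be at least as long as the number of sub-tables."""
    assert len(path) >= len(tables)
    return {switch: table for switch, table in zip(path, tables)}
```

With, say, 10 rules and k = 3, each switch on a 3-hop path stores only 3 or 4 rules, while the union along the path still implements the full policy, which is the essence of the space saving.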
Fig. 3. The separated control logic can be viewed as a network operating system, upon which applications can be built to "program" the network.

C. The Controller

The decoupled system has been compared to an operating system [17], in which the controller provides a programmatic interface to the network that can be used to implement management tasks and offer new functionalities. A layered view of this model is illustrated in Figure 3. This abstraction assumes that control is centralized and that applications are written as if the network were a single system. It enables the SDN model to be applied over a wide range of applications and heterogeneous network technologies and physical media, such as wireless (e.g., 802.11 and 802.16), wired (e.g., Ethernet), and optical networks.

As a practical example of the layering abstraction accessible through open application programming interfaces (APIs), Figure 4 illustrates the architecture of an SDN controller based on the OpenFlow protocol. This specific controller is a fork of the Beacon controller [22] called Floodlight [38]. In this figure, it is possible to observe the separation between the controller and the application layers. Applications can be written in Java and can interact with the built-in controller modules via a Java API. Other applications can be written in different languages and interact with the controller modules via the REST API. This particular example of an SDN controller allows the implementation of built-in modules that can communicate with their implementation of the OpenFlow controller (i.e., OpenFlow Services). The controller, in turn, can communicate with the forwarding devices via the OpenFlow protocol through the abstraction layer present at the forwarding hardware, illustrated in Figure 3.
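As an illustration of this northbound REST interaction, the following Python sketch builds and submits a static flow entry. The JSON field names and the endpoint path are modeled from memory on Floodlight's static flow pusher module; both vary across controller versions, so treat them as assumptions to be checked against the documentation of the controller actually deployed.

```python
import json
import urllib.request

def build_static_flow(name, switch_dpid, in_port, out_port, priority=100):
    """Assemble the JSON body for a static flow entry (field names are
    assumptions modeled on Floodlight's static flow pusher)."""
    return {
        "switch": switch_dpid,        # datapath id of the target switch
        "name": name,                 # entry name, reusable for deletion
        "priority": str(priority),
        "in_port": str(in_port),
        "active": "true",
        "actions": "output=%d" % out_port,
    }

def push_flow(controller_url, flow):
    """POST the entry to the controller's REST API. The endpoint path is
    an assumption; verify it for the controller version in use."""
    req = urllib.request.Request(
        controller_url + "/wm/staticflowpusher/json",
        data=json.dumps(flow).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A hypothetical usage would be `push_flow("http://127.0.0.1:8080", build_static_flow("h1-h2", "00:00:00:00:00:00:00:01", 1, 2))`, installing a rule that forwards traffic arriving on port 1 out of port 2 on the given switch.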
While the aforementioned layering abstractions accessible via open APIs allow the simplification of policy enforcement and management tasks, the bindings between the control and the network forwarding elements must be closely maintained. The choices made while implementing such layering architectures can dramatically influence the performance and scalability of the network. In the following, we address some such scalability concerns and go over some proposals that aim at overcoming these challenges. We leave a more detailed discussion of the application layer and the implementation of services and policy enforcement to Section VI-C.

1) Control Scalability: An initial concern that arises when offloading control from the switching hardware is the scalability and performance of the network controller(s). The original Ethane [20] controller, hosted on a commodity desktop machine, was tested to handle up to 11,000 new flow requests per second and responded within 1.5 milliseconds. A more recent study [39] of several OpenFlow controller implementations (NOX-MT, Maestro, Beacon), conducted on a larger emulated network with 100,000 endpoints and up to 256 switches, found that all were able to handle at least 50,000 new flow requests per second in each of the tested scenarios. On an eight-core machine, the multi-threaded NOX-MT implementation handled 1.6 million new flow requests per second with an average response time of 2 milliseconds. As these results show, a single controller is able to handle a surprising number of new flow requests, and should be able to manage all but the largest networks. Furthermore, new controllers under development, such as McNettle [40], target powerful multicore servers and are being designed to scale up to large data center workloads (around 20 million flow requests per second and up to 5000 switches). Nonetheless, multiple controllers may be used to reduce latency or increase fault tolerance.
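The throughput figures above come from carefully engineered controllers; the toy Python sketch below only illustrates the structure such multi-threaded controllers share, namely a pool of workers draining packet-in events and emitting forwarding decisions. All names are hypothetical, and Python threads are a poor proxy for the multicore scaling the cited systems achieve.

```python
import queue
import threading
import time

def handle_packet_in(event):
    """Toy flow-setup handler: compute a forwarding decision. A real
    controller would consult topology state and push a flow-mod."""
    return {"flow_id": event, "action": "output:1"}

def run_controller(n_workers, n_events):
    """Drain n_events packet-in events with n_workers threads and return
    the count handled plus the measured rate (events/second)."""
    q = queue.Queue()
    for i in range(n_events):
        q.put(i)
    handled = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                event = q.get_nowait()
            except queue.Empty:
                return  # queue drained: worker exits
            decision = handle_packet_in(event)
            with lock:
                handled.append(decision)

    start = time.perf_counter()
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return len(handled), len(handled) / elapsed
```

Varying `n_workers` in such a harness mimics, very loosely, how the cited benchmarks explore controller throughput as a function of parallelism.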
A related concern is the controller placement problem [41], which attempts to determine both the optimal number of controllers and their location within the network topology, often choosing between optimizing for average- and worst-case latency. The latency of the link used for communication between controller and switch is of great importance when dimensioning a network or evaluating its performance [34]. That was one of the main motivations behind the work in [42], which evaluated how the controller and the network perform under bandwidth and latency constraints on the control link. This work concludes that the bandwidth of the control link determines how many flows can be processed by the controller, as well as the loss rate under saturation conditions. The switch-to-control latency, on the other hand, has a major impact on the overall behavior of the network, as each switch cannot forward data until it receives the message from the controller that inserts the appropriate rules in the flow table. This interval can grow with the link latency and dramatically impact the performance of network applications.

Control modeling also greatly impacts network scalability. Some important scalability issues are presented in [43], along with a discussion of scalability trade-offs in software-defined network design.

2) Control models: In the following, we go over some of these SDN design options and discuss different methods of controlling a software-defined network, many of which are interrelated:

• Centralized vs. Distributed: Although protocols such as OpenFlow specify that a switch is controlled by a controller, and therefore appear to imply centralization, software-defined networks may have either a centralized or distributed control-
plane. Though controller-to-controller communication is not defined by OpenFlow, it is necessary for any type of distribution or redundancy in the control-plane.

Fig. 4. The Floodlight architecture as an example of an OpenFlow controller.

A physically centralized controller represents a single point of failure for the entire network; therefore, OpenFlow allows the connection of multiple controllers to a switch, which would allow backup controllers to take over in the event of a failure.
Onix [44] and HyperFlow [45] take the idea further by attempting to maintain a logically centralized but physically distributed control plane. This decreases the look-up overhead by enabling communication with local controllers, while still allowing applications to be written with a simplified central view of the network. The potential downsides are trade-offs [46] related to consistency and staleness when distributing state throughout the control plane, which have the potential to cause applications that believe they have an accurate view of the network to act incorrectly.
A hybrid approach, such as Kandoo [47], can utilize local controllers for local applications and redirect to a global controller for decisions that require centralized network state. This reduces the load on the global controller by filtering the number of new flow requests, while also providing the data-path with faster responses for requests that can be handled by a local control application.
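The hybrid local/global split can be sketched as follows. This is a toy model of the idea only; the class and method names are ours and do not correspond to Kandoo's actual API.

```python
class RootController:
    """Global controller: holds network-wide state (toy model)."""
    def __init__(self):
        self.requests = 0  # how many decisions were escalated to the root

    def decide(self, flow):
        self.requests += 1
        return "global-path"

class LocalController:
    """Local controller: answers local queries, escalates the rest."""
    # Hypothetical set of applications that need only local state.
    LOCAL_APPS = {"elephant-detect", "local-forward"}

    def __init__(self, root):
        self.root = root

    def decide(self, app, flow):
        if app in self.LOCAL_APPS:
            return "local-path"        # fast response, no root involvement
        return self.root.decide(flow)  # decision needs network-wide state

root = RootController()
local = LocalController(root)
local.decide("local-forward", "f1")       # handled entirely at the edge
local.decide("inter-domain-route", "f2")  # escalated to the root controller
```

Only the escalated request reaches the root, which is exactly how the load on the global controller is reduced while local requests get faster responses.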
A software-defined network can also have some level of logical decentralization, with multiple logical controllers. An interesting type of proxy controller, called Flowvisor [48], can be used to add a level of network virtualization to OpenFlow networks and allow multiple controllers to simultaneously control overlapping sets of physical switches. Initially developed to allow experimental research to be conducted on deployed networks alongside production traffic, it also facilitates and demonstrates the ease of deploying new services in SDN environments. A logically decentralized control plane would be needed in an inter-network spanning multiple administrative domains. Though the domains may not agree to centralized control, a certain level of sharing may be appropriate (e.g., to ensure service level agreements are met for traffic flowing between domains). • Control Granularity Traditionally, the basic unit of networking has been the packet. Each packet contains address information necessary for a network switch to make routing decisions. However, most applications send data as a flow of many individual packets. A network that wishes to provide QoS or service guarantees to certain applications may benefit from individual flow-based control. Control can be further abstracted to an aggregated flow-match, rather than individual flows. Flow aggregation may be based on source, destination, application, or any combination thereof. In a software-defined network where network elements are controlled remotely, overhead is caused by traffic between the data-plane and control-plane. As such, using packet level granularity would incur additional delay as
the controller would have to make a decision for each arriving packet. When controlling individual flows, the decision made for the first packet of the flow can be applied to all subsequent packets of that flow. The overhead may be further reduced by grouping flows together, such as all traffic between two hosts, and performing control decisions on the aggregated flows.
• Reactive vs. Proactive Policies
Under a reactive control model, such as the one proposed by Ethane [20], forwarding elements must consult a controller each time a decision must be made, such as when a packet from a new flow reaches a switch. In the case of flow-based control granularity, there will be a small performance delay as the first packet of each new flow is forwarded to the controller for decision (e.g., forward or drop), after which future packets within that flow will travel at line rate within the forwarding hardware. While the delay incurred by the first-packet may be negligible in many cases, it may be a concern if the controller is geographically remote (though this can be mitigated by physically distributing the controller [45]) or if most flows are short-lived, such as single-packet flows. There are also some scalability issues in larger networks, as the controller must be able to handle a larger volume of new flow requests. Alternatively, proactive control approaches push policy rules from the controller to the switches. A good example of proactive control is DIFANE [35], which partitions rules over a hierarchy of switches, such that the controller rarely needs to be consulted about new flows and traffic is kept within the data-plane. In their experiments, DIFANE reduces first-packet delay from a 10ms average round-trip time (RTT) with a centralized NOX controller to a 0.4ms average RTT for new single-packet flows.
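The reactive, flow-granularity behavior described above can be made concrete with a small sketch: only the first packet of a flow pays the controller round trip, after which the installed rule serves all later packets of that flow. The class, its fields, and the flow key are illustrative assumptions, not part of the OpenFlow specification.

```python
class ReactiveSwitch:
    """Toy reactive switch: consult the controller only on flow-table misses."""
    def __init__(self, controller):
        self.flow_table = {}        # flow_key -> action (installed rules)
        self.controller = controller
        self.misses = 0             # packets that needed a controller round trip

    def forward(self, packet):
        # Flow-level granularity: one rule covers all packets sharing src/dst.
        key = (packet["src"], packet["dst"])
        if key not in self.flow_table:
            self.misses += 1                      # first packet of the flow
            self.flow_table[key] = self.controller(key)
        return self.flow_table[key]               # later packets hit the table

controller = lambda key: "out-port-2"             # toy controller decision
sw = ReactiveSwitch(controller)
pkts = [{"src": "10.0.0.1", "dst": "10.0.0.2"} for _ in range(5)]
actions = [sw.forward(p) for p in pkts]
# Five packets of the same flow trigger a single controller consultation.
```

Coarser aggregation (e.g., keying only on destination prefix) would reduce controller consultations further, at the cost of less precise per-flow control.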
It was also shown to increase the new flow throughput, as the tested version of NOX achieved a peak of 50,000 single-packet flows per second while the DIFANE solution achieved 800,000 single-packet flows per second. Interestingly, it was observed that the OpenFlow switch's local controller implementation becomes a bottleneck before the central NOX controller. This was attributed to the fact that commercial OpenFlow switch implementations were limited to sending 60-330 new flow requests per second at the time of their publication (2010).
As shown in Figure 5, a controller that acts as a network operating system must implement at least two interfaces: a "southbound" interface that allows switches to communicate with the controller and a "northbound" interface that presents an API to network control and high-level applications/services.

Fig. 5. A controller with a northbound and southbound interface.

D. Southbound Communication: Controller-Switch
An important aspect of SDNs is the link between the data-plane and the control-plane. As forwarding elements are controlled by an open interface, it is important that this link remains available and secure.
The OpenFlow protocol can be viewed as one possible implementation of controller-switch interactions, as it defines the communication between the switching hardware and a network controller. For security, OpenFlow 1.3.0 provides optional support for encrypted Transport Layer Security (TLS) communication and a certificate exchange between the switches and the controller(s); however, the exact implementation and certificate format is not currently specified. Also outside the scope of the current specification are fine-grained security options regarding scenarios with multiple controllers, as there is no method specified to only grant partial access permissions to an authorized controller. We examine OpenFlow controller implementation options in greater detail in Section IV. E.
Northbound Communication: Controller-Service
External management systems or network services may wish to extract information about the underlying network or control an aspect of network behavior or policy. Additionally, controllers may find it necessary to communicate with each other for a variety of reasons. For example, an internal control application may need to reserve resources across multiple domains of control, or a "primary" controller may need to share policy information with a backup, etc.
Unlike controller-switch communication, there is no currently accepted standard for northbound interactions, and they are more likely to be implemented on an ad hoc basis for particular applications. We discuss this further in Section VI.
F. Standardization Efforts
Recently, several standardization organizations have turned their attention to SDN. For example, as previously mentioned, the IETF's Forwarding and Control Element Separation (ForCES) Working Group [1] has been working on standardizing mechanisms, interfaces, and protocols aiming at the centralization of network control and abstraction of network infrastructure. The Open Network Foundation (ONF) [3] has been trying to standardize the OpenFlow
protocol. As the control plane abstracts network applications from underlying hardware infrastructure, they focus on standardizing the interfaces between: (1) network applications and the controller (i.e., the northbound interface) and (2) the controller and the switching infrastructure (i.e., the southbound interface), which defines the OpenFlow protocol itself. Some of the Study Groups (SGs) of ITU's Telecommunication Standardization Sector (ITU-T) [49] are currently working towards discussing requirements and creating recommendations for SDNs under different perspectives. For instance, the SG13 focuses on Future Networks, including cloud computing, mobile and next generation networks, and is establishing requirements for network virtualization. Other ITU-T SGs such as the SG11 for protocols and test specifications started, in early 2013, requirements and architecture discussions on SDN signaling.

TABLE I
CURRENT SOFTWARE SWITCH IMPLEMENTATIONS COMPLIANT WITH THE OPENFLOW STANDARD.

Software Switch     | Implementation | Overview                                                              | Version
Open vSwitch [55]   | C/Python       | Open source software switch that aims to implement a switch platform in virtualized server environments. Supports standard management interfaces and enables programmatic extension and control of the forwarding functions. Can be ported into ASIC switches. | v1.0
Pantou/OpenWRT [56] | C              | Turns a commercial wireless router or Access Point into an OpenFlow-enabled switch. | v1.0
ofsoftswitch13 [57] | C/C++          | OpenFlow 1.3 compatible user-space software switch implementation.    | v1.3
Indigo [58]         | C              | Open source OpenFlow implementation that runs on physical switches and uses the hardware features of Ethernet switch ASICs to run OpenFlow. | v1.0
The Software-Defined Networking Research Group (SDNRG) at IRTF [50] is also focusing on SDN under various perspectives with the goal of identifying new approaches that can be defined and deployed, as well as identifying future research challenges. Some of their main areas of interest include solution scalability, abstractions, security, and programming languages and paradigms particularly useful in the context of SDN.
These and other working groups perform important work, coordinating efforts to evolve existing standards and proposing new ones. The goal is to facilitate smooth transitions from legacy networking technology to the new protocols and architectures, such as SDN. Some of these groups, such as ITU-T's SG13, advocate the establishment of a Joint Coordination Activity on SDN (JCA-SDN) for collaboration and coordination between standardization efforts, also taking advantage of the work performed by the Open Source Software (OSS) community, such as OpenStack [51] and OpenDayLight [52], as they start developing the building blocks for SDN implementation.
IV. SDN DEVELOPMENT TOOLS
SDN has been proposed to facilitate network evolution and innovation by allowing rapid deployment of new services and protocols. In this section, we provide an overview of currently available tools and environments for developing SDN-based services and protocols.
A. Emulation and Simulation Tools
Mininet [53] allows an entire OpenFlow network to be emulated on a single machine, simplifying the initial development and deployment process. New services, applications and protocols can first be developed and tested on an emulation of the anticipated deployment environment before moving to the actual hardware. By default Mininet supports OpenFlow v1.0, though it may be modified to support a software switch that implements a newer release.
The ns-3 [54] network simulator supports OpenFlow switches within its environment, though the current version only implements OpenFlow v0.89.
B. Available Software Switch Platforms
There are currently several SDN software switches available that can be used, for example, to run an SDN testbed or when developing services over SDN. Table I presents a list of current software switch implementations with a brief description including implementation language and the OpenFlow standard version that the current implementation supports.
C. Native SDN Switches
One of the main SDN enabling technologies currently being implemented in commodity networking hardware is the OpenFlow standard. In this section we do not intend to present a detailed overview of OpenFlow enabled hardware and makers, but rather provide a list of native SDN switches currently available in the market and provide some information about them, including the version of OpenFlow they implement. One clear evidence of industry's strong commitment to SDN is the availability of commodity network hardware that is OpenFlow enabled. Table II lists commercial switches that are currently available, their manufacturer, and the version of OpenFlow they implement.

TABLE II
MAIN CURRENT AVAILABLE COMMODITY SWITCHES BY MAKERS, COMPLIANT WITH THE OPENFLOW STANDARD.

Maker           | Switch Model                                  | Version
Hewlett-Packard | 8200zl, 6600, 6200zl, 5400zl, and 3500/3500yl | v1.0
Brocade         | NetIron CES 2000 Series                       | v1.0
IBM             | RackSwitch G8264                              | v1.0
NEC             | PF5240, PF5820                                | v1.0
Pronto          | 3290 and 3780                                 | v1.0
Juniper         | Junos MX-Series                               | v1.0
Pica8           | P-3290, P-3295, P-3780 and P-3920             | v1.2

D. Available Controller Platforms
Table III shows a snapshot of current controller implementations. To date, all the controllers in the table support the
OpenFlow protocol version 1.0, unless stated otherwise. This table also provides a brief overview of the listed controllers. Included in Table III are also two special purpose controller implementations: Flowvisor [48], mentioned previously, and RouteFlow [66]. The former acts as a transparent proxy between OpenFlow switches and multiple OpenFlow controllers. It is able to create network slices and can delegate control of each slice to a different controller, also promoting isolation between slices. RouteFlow, on the other hand, is an open source project to provide virtualized IP routing over OpenFlow capable hardware. It is composed of an OpenFlow Controller application, an independent server, and a virtual network environment that reproduces the connectivity of a physical infrastructure and runs IP routing engines. The routing engines generate the forwarding information base (FIB) into the Linux IP tables according to the routing protocols configured (e.g., OSPF, BGP). An extension of RouteFlow is presented in [67], which discusses Routing Control Platforms (RCPs) in the context of OpenFlow/SDN. They proposed a controller-centric networking model along with a prototype implementation of an autonomous-system-wide abstract BGP routing service.
E. Code Verification and Debugging
Verification and debugging tools are vital resources for traditional software development and are no less important for SDN. Indeed, for the idea of portable network "apps" to be successful, network behavior must be thoroughly tested and verified.
NICE [68] is an automated testing tool used to help uncover bugs in OpenFlow programs through model checking and symbolic execution.
Anteater [69] takes a different approach by attempting to check network invariants that exist in the data plane, such as connectivity or consistency. The main benefit of this approach is that it is protocol-agnostic; it will also catch errors that result from faulty switch firmware or inconsistencies with the control plane communication. VeriFlow [70] has a similar goal, but goes further by proposing a real-time verification tool that resides between the controller and the forwarding elements. This adds the potential benefit of being able to halt bad rules that will cause anomalous behavior before they reach the network.
Other efforts proposed debugging tools that provide insights gleaned from control plane traffic. OFRewind [71] allows network events (control and data) to be recorded at different granularities and later replayed to reproduce a specific scenario, granting the opportunity to localize and troubleshoot the events that caused the network anomaly. ndb [72] implements breakpoints and packet-backtraces for SDN. Just as with the popular software debugger gdb, users can pinpoint events that lead to error by pausing execution at a breakpoint, or, using a packet backtrace, show the sequence of forwarding actions seen by that packet. STS [73] is a software-defined network troubleshooting simulator. It is written in Python and depends on POX. It simulates the devices in a given network, allowing for testing cases and identifying the set of inputs that generates a given error.
V. SDN APPLICATIONS
Software-defined networking has applications in a wide variety of networked environments. By decoupling the control and data planes, programmable networks enable customized control, an opportunity to eliminate middleboxes, as well as simplified development and deployment of new network services and protocols. Below, we examine different environments for which SDN solutions have been proposed or implemented.
A. Enterprise Networks
Enterprises often run large networks, while also having strict security and performance requirements. Furthermore, different enterprise environments can have very different requirements, characteristics, and user populations. For example, University networks can be considered a special case of enterprise networks: in such an environment, many of the connecting devices are temporary and not controlled by the University, further challenging security and resource allocation. Additionally, Universities must often provide support for research testbeds and experimental protocols.
Adequate management is critically important in Enterprise environments, and SDN can be used to programmatically enforce and adjust network policies as well as help monitor network activity and tune network performance.
Additionally, SDN can be used to simplify the network by ridding it of middleboxes and integrating their functionality within the network controller. Some notable examples of middlebox functionality that has been implemented using SDN include NAT, firewalls, load balancers [74][75], and network access control [76]. In the case of more complex middleboxes with functionalities that cannot be directly implemented without performance degradation (e.g., deep packet inspection), SDN can be used to provide unified control and management [77].
The work presented in [78] addresses the issues related to consistent network updates. Configuration changes are a common source of instability in networks and can lead to outages, security flaws, and performance disruptions. In [78], a set of high-level abstractions is proposed that allows network administrators to update the entire network, guaranteeing that every packet traversing the network is processed by exactly one consistent global network configuration. To support these abstractions, several OpenFlow-based update mechanisms were developed.
As discussed in earlier sections, OpenFlow evolved from Ethane [20], a network architecture designed specifically to address the issues faced by enterprise networks.
B. Data Centers
Data centers have evolved at an amazing pace in recent years, constantly attempting to meet increasingly higher and rapidly changing demand. Careful traffic management and policy enforcement is critical when operating at such large scales, especially when any service disruption or additional delay may lead to massive productivity and/or profit loss. Due to the challenges of engineering networks of this scale and
OpenFlow protocol version 1.0, unless stated otherwise. This table also provides a brief overview of the listed controllers. Included in Table III are also two special-purpose controller implementations: FlowVisor [48], mentioned previously, and RouteFlow [66]. The former acts as a transparent proxy between OpenFlow switches and multiple OpenFlow controllers. It is able to create network slices and can delegate control of each slice to a different controller, while also enforcing isolation between slices. RouteFlow, on the other hand, is an open-source project that provides virtualized IP routing over OpenFlow-capable hardware. It is composed of an OpenFlow controller application, an independent server, and a virtual network environment that reproduces the connectivity of a physical infrastructure and runs IP routing engines. The routing engines generate the forwarding information base (FIB) in the Linux IP tables according to the routing protocols configured (e.g., OSPF, BGP). An extension of RouteFlow is presented in [67], which discusses Routing Control Platforms (RCPs) in the context of OpenFlow/SDN. The authors propose a controller-centric networking model along with a prototype implementation of an autonomous-system-wide abstract BGP routing service.

E. Code Verification and Debugging

Verification and debugging tools are vital resources for traditional software development and are no less important for SDN. Indeed, for the idea of portable network "apps" to be successful, network behavior must be thoroughly tested and verified.

NICE [68] is an automated testing tool that helps uncover bugs in OpenFlow programs through model checking and symbolic execution.

Anteater [69] takes a different approach by attempting to check network invariants that exist in the data plane, such as connectivity or consistency.
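The style of data-plane analysis performed by such tools can be illustrated with a toy example. The sketch below models forwarding state as per-switch tables and checks a loop-free reachability invariant by directly following next hops; the switch names and the traversal-based checker are purely illustrative (Anteater itself compiles invariants into boolean satisfiability queries rather than traversing the graph):

```python
# Toy sketch of data-plane invariant checking (illustrative only).
# Forwarding state: switch -> {destination prefix -> next hop}.
# A next hop of None means the packet is delivered locally.
FIBS = {
    "s1": {"10.0.1.0/24": "s2", "10.0.2.0/24": "s3"},
    "s2": {"10.0.1.0/24": None, "10.0.2.0/24": "s3"},
    "s3": {"10.0.1.0/24": "s2", "10.0.2.0/24": None},
}

def check_reachability(src, prefix, fibs):
    """Follow next hops for `prefix` starting at `src`.

    Returns (delivered, path); detects black holes (no matching
    rule at some switch) and forwarding loops.
    """
    path, node = [src], src
    while True:
        hop = fibs.get(node, {}).get(prefix)
        if hop is None:
            # Delivered locally if a rule exists; otherwise a black hole.
            delivered = prefix in fibs.get(node, {})
            return delivered, path
        if hop in path:  # revisiting a switch: loop invariant violated
            return False, path + [hop]
        path.append(hop)
        node = hop

# Check the connectivity invariant for every (switch, prefix) pair.
all_prefixes = {p for fib in FIBS.values() for p in fib}
violations = [
    (sw, pfx)
    for sw in FIBS
    for pfx in all_prefixes
    if not check_reachability(sw, pfx, FIBS)[0]
]
print(violations)  # prints [] -- the invariant holds for this state
```

Because such a check consults only the installed forwarding state, it is independent of the protocols that produced that state, which is the property discussed next.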
The main benefit of this approach is that it is protocol-agnostic; it will also catch errors that result from faulty switch firmware or inconsistencies in the control-plane communication. VeriFlow [70] has a similar goal but goes further by proposing a real-time verification tool that resides between the controller and the forwarding elements. This adds the potential benefit of being able to halt bad rules that would cause anomalous behavior before they reach the network.

Other efforts propose debugging tools that provide insights gleaned from control-plane traffic. OFRewind [71] allows network events (control and data) to be recorded at different granularities and later replayed to reproduce a specific scenario, providing the opportunity to localize and troubleshoot the events that caused a network anomaly. ndb [72] implements breakpoints and packet backtraces for SDN. Just as with the popular software debugger gdb, users can pinpoint the events that lead to an error by pausing execution at a breakpoint or, using a packet backtrace, display the sequence of forwarding actions seen by a given packet. STS [73] is a software-defined network troubleshooting simulator. It is written in Python and depends on POX. It simulates the devices in a given network, allowing test cases to be run and the set of inputs that generates a given error to be identified.

V. SDN APPLICATIONS

Software-defined networking has applications in a wide variety of networked environments. By decoupling the control and data planes, programmable networks enable customized control, an opportunity to eliminate middleboxes, and simplified development and deployment of new network services and protocols. Below, we examine different environments for which SDN solutions have been proposed or implemented.

A. Enterprise Networks

Enterprises often run large networks while also having strict security and performance requirements.
Furthermore, different enterprise environments can have very different requirements, characteristics, and user populations. For example, university networks can be considered a special case of enterprise networks: in such an environment, many of the connecting devices are temporary and not controlled by the university, further challenging security and resource allocation. Additionally, universities must often provide support for research testbeds and experimental protocols.

Adequate management is critically important in enterprise environments, and SDN can be used to programmatically enforce and adjust network policies as well as to help monitor network activity and tune network performance.

Additionally, SDN can be used to simplify the network by ridding it of middleboxes and integrating their functionality into the network controller. Some notable examples of middlebox functionality that has been implemented using SDN include NAT, firewalls, load balancers [74] [75], and network access control [76]. In the case of more complex middleboxes whose functionality cannot be directly implemented without performance degradation (e.g., deep packet inspection), SDN can be used to provide unified control and management [77].

The work presented in [78] addresses the issues related to consistent network updates. Configuration changes are a common source of instability in networks and can lead to outages, security flaws, and performance disruptions. In [78], a set of high-level abstractions is proposed that allows network administrators to update the entire network while guaranteeing that every packet traversing the network is processed by exactly one consistent global network configuration. To support these abstractions, several OpenFlow-based update mechanisms were developed.

As discussed in earlier sections, OpenFlow evolved from Ethane [20], a network architecture designed specifically to address the issues faced by enterprise networks.

B. Data Centers

Data centers have evolved at an amazing pace in recent years, constantly attempting to meet increasingly higher and rapidly changing demand. Careful traffic management and policy enforcement are critical when operating at such large scales, especially when any service disruption or additional delay may lead to massive productivity and/or profit losses. Due to the challenges of engineering networks of this scale and