Many users start by playing around with the pyswitch component, as it is a simple example written in Python only, and it is a good way to get a feel for the framework. Look at the code, see how it uses the Python core API (defined in src/nox/lib/core.py), and perhaps start experimenting with it using a single OpenFlow switch as the controlled entity. For a more challenging step, look at the Routing module, which makes use of the Authenticator, Topology and Discovery components, to see how you can start building bigger, multi-component applications.
However, keep in mind that what you just installed is merely a framework for programming network behaviours. The included components are there to set a foundation and give examples of how NOX can be used. From now on, what happens with NOX and your network is whatever you want (and program) it to do.
For an example of setting up flows through a NOX component written in Python, look at src/nox/coreapps/examples/pyswitch.py. The idea is that you construct the OpenFlow flow-mod manually, filling out the flow description and the actions, and then send it to the desired switch. Deleting flows works in the same way. The process is the same in a C++ component; only the API differs a bit.
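As a rough sketch of the flow-setup pattern used in pyswitch.py: you build a match dictionary and an action list, then hand them to the component's install call. The constants below are locally defined stand-ins for values that a real component imports from nox.lib.core and nox.lib.openflow, so treat this as an illustration of the shape of the data, not the exact API.

```python
# Stand-ins for constants normally imported from nox.lib.core /
# nox.lib.openflow -- the values here are illustrative assumptions.
IN_PORT = 'in_port'
DL_DST = 'dl_dst'
OFPAT_OUTPUT = 0          # OpenFlow 1.0 "output to port" action type
OFP_FLOW_PERMANENT = 0    # no hard timeout

def make_flow(inport, dst_mac, outport):
    """Build the match (attrs) and action list for a simple L2 flow,
    in roughly the shape pyswitch passes to install_datapath_flow()."""
    attrs = {IN_PORT: inport, DL_DST: dst_mac}
    # Each action is [type, args]; the output action takes [max_len, port].
    actions = [[OFPAT_OUTPUT, [0, outport]]]
    return attrs, actions

attrs, actions = make_flow(1, '\x00\x11\x22\x33\x44\x55', 2)
# In a real component you would then call something along the lines of:
#   self.install_datapath_flow(dpid, attrs, idle_timeout,
#                              OFP_FLOW_PERMANENT, actions)
# Compare with pyswitch.py and src/nox/lib/core.py for the real signature.
```

Deleting a flow follows the same pattern: describe the flow with the same kind of match dictionary and send a delete request to the switch.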
The components and their relationships are declared to NOX through the meta.json files in their respective directories. Take a look at build/src/nox/coreapps/examples/meta.json; you can see several components there. The "name" field defines the component's name as passed on the command line when invoking NOX. Pay attention to this, as the name can differ from the name of the file which actually implements the component; the latter is defined under the "python" key. For example, in build/src/nox/coreapps/examples/meta.json, the sample_routing component is implemented by the file samplerouting.py. In this case, you would invoke ./nox_core with the parameter sample_routing.
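A meta.json entry for the example above might look roughly like this (the exact set of keys, and whether "python" holds a module path or a file name, is best checked against the real file in your checkout; this is a sketch of the structure only):

```json
{
    "components": [
        {
            "name": "sample_routing",
            "python": "nox.coreapps.examples.samplerouting"
        }
    ]
}
```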
If you simply register an event handler, it is not guaranteed to execute before or after any other event handler for the same event. However, there are cases where the order that event handlers are called is important, so NOX provides a mechanism to specifically define an ordering.
The key to this mechanism is the build/src/etc/nox.json file. For example, if you find Packet_in_event in this file, you'll see that it is a list of component names (other event names work just the same -- we're just using Packet_in_event for an example). By default (as of this writing), you can see that the Spanning_tree component has the first say when a Packet_in_event is raised. Editing the file as follows:
"Packet_in_event": [ "my_application", "spanning_tree", . .
would make my_application intercept all incoming packets that arrive to NOX. my_application can then act upon the packets and decide whether or not it will pass the Event on to subsequent components (starting with Spanning_tree). Whether events are propagated to subsequent components or not depends on the Disposition that their event handlers return.
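The decision pattern, roughly, looks like this in a Python handler. CONTINUE and STOP below are local stand-ins for the dispositions NOX defines (a real component gets them from nox.lib.core), so the snippet is self-contained but illustrative only:

```python
# Local stand-ins for the dispositions a NOX Python handler returns
# (in a real component these come from nox.lib.core).
CONTINUE = 0   # pass the event on to the next handler in the chain
STOP = 1       # swallow the event; later handlers never see it

def looks_interesting(packet):
    # Placeholder policy for this sketch: claim LLDP frames.
    return packet.get('dl_type') == 0x88cc

def packet_in_handler(dpid, inport, reason, length, bufid, packet):
    """Sketch of a handler placed first in the Packet_in_event chain."""
    if looks_interesting(packet):
        # Handle the packet ourselves and stop the chain, so
        # spanning_tree and later components never see this event.
        return STOP
    # Not ours -- let subsequent components run.
    return CONTINUE
```

Whatever my_application returns here determines whether spanning_tree and the rest of the chain ever see the event.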
The log settings are in the form:
<module>:<facility>:<level>
module is the name of a component or other module. Something like "nox" or "pyrt" or "pyswitch", for example. "ANY" is a special string which matches every module.
facility is the logging facility. The ones currently in NOX are "syslog" and "console". Again, "ANY" is a special string which matches all facilities.
level is the minimum log level that you want logged. From most to least significant, these are: EMER, ERR, WARN, INFO, DBG.
You can include multiple of these. For example:
--verbose=ANY:ANY:DBG --verbose=pyswitch:ANY:EMER
...should turn on debug-level logging for everything except pyswitch, which will only log emergency-level messages.
In the destiny branch and beyond, -v (or --verbose) is basically a shortcut for ANY:ANY:INFO. Including it twice (-v -v) is basically a shortcut for ANY:ANY:DBG behavior (this was the normal -v behavior before destiny).
You can also configure the log via the nox.json config file. Basically, there is a "logging" key which contains an array of maps which contain combinations of the keys "module", "facility", and "level". So for example, if you wanted to do the same as the commandline above, your nox.json might look like:
{ "nox" : { "logging" : [ { "level" : "DBG" }, { "module" : "pyswitch", "level" : "EMER" } ] ... more stuff ... } }
What happens internal to NOX depends on the components being run. The general rule is that when a packet reaches a controlled switch which holds no flow-entry for the description of the packet, it will be forwarded to NOX. (Typically, only the packet's first 128 bytes holding the interesting values for decision-making will be pushed to NOX, but sending the whole packet is configurable too). NOX will translate the packet into a Packet_in_event which it will raise. At this point, what happens to the packet depends on the processing chain of the Packet_in_event. The component(s) responsible for handling the event will process the packet and take corresponding actions.
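A minimal learning-switch skeleton following this pattern might look like the following. In the real framework this would subclass nox.lib.core.Component, register its handler for Packet_in_event, and install flows; here it is a framework-free stand-in that only shows the learn-then-decide logic, so compare against pyswitch.py for the genuine article:

```python
class LearningSwitchSketch:
    """Framework-free sketch of what a NOX component does on packet-in:
    learn the source MAC's port, then forward or flood."""

    def __init__(self):
        self.mac_to_port = {}   # learned MAC -> switch port table

    def packet_in(self, dpid, inport, packet):
        """Handle one packet-in: learn the source, decide the output.
        Returns the known output port, or 'FLOOD' if the destination
        has not been learned yet (a real component would install a
        flow in the first case and flood the packet in the second)."""
        self.mac_to_port[packet['dl_src']] = inport
        return self.mac_to_port.get(packet['dl_dst'], 'FLOOD')
```

Until a flow is installed for a packet's header, every matching packet keeps arriving at the controller as a Packet_in_event; once the flow is in the switch's table, the switch forwards on its own.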
As a simple scenario, let's look at the example components included in the NOX release:
- Use case scenario 1: pyswitch
- Use case scenario 2: routing
While the above examples demonstrate rather simple switch functionality, one can imagine far more complicated operations being performed by the switch simply by having it follow pre-established forwarding decisions.
Several developers have run different performance tests. The rate of flow modifications the controller can handle differs based on the host PC capabilities, the network size and the switches used. As an indication, the following results are provided by Minlan Yu's test at Princeton:
    #switches   flows/sec (peak rate)
    1           18K
    2           29K
    3           39K
    4           59K
    5           45K
    6           50K
In terms of scalability, NOX has been used successfully in corporate and campus networks consisting of ? switches for over a year. However, the number of switches a single controller can handle depends strongly on the functionality it imposes on the network, the network traffic, and anything else that has an effect on the number of control operations required by the controller.
There is currently a GUI component in the development branch of the repository which provides visualization of the controlled network. Additionally, some third party apps have been built (for example, LAVI).
Take a look at the Publications page.
Please take a look at Dependencies.
Yes, the unit test framework hasn't been fully ported yet. This will not affect your build. However, system testing should work:
~/noxrepo/nox/build/src> sudo make test
- Nope, that fails too.
- You're probably using GCC/G++ 4.4. The short story is: that's okay. If you're interested in the long story, check this.
NOX supports any routing algorithm that researchers/developers implement on it.
That said, the bundled Routing application computes all-pairs shortest paths on each link-status change and uses the result to set up per-flow routes on the network. The shortest-path algorithm used is from Demetrescu et al. For more information about how the included module works, check Routing.
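The all-pairs recomputation can be pictured with a plain Floyd-Warshall pass over the current link table. This is an illustration of the idea only, not NOX's actual implementation, which uses the dynamic algorithm of Demetrescu et al. precisely to avoid recomputing everything from scratch:

```python
def all_pairs_shortest_paths(nodes, links):
    """Floyd-Warshall over a link map {(u, v): cost}.
    Returns dist[(u, v)] for every ordered pair of nodes."""
    INF = float('inf')
    dist = {(u, v): (0 if u == v else links.get((u, v), INF))
            for u in nodes for v in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                alt = dist[(i, k)] + dist[(k, j)]
                if alt < dist[(i, j)]:
                    dist[(i, j)] = alt
    return dist

# Rerun on every link-status change; e.g. a 3-switch line s1-s2-s3:
topo = {('s1', 's2'): 1, ('s2', 's1'): 1,
        ('s2', 's3'): 1, ('s3', 's2'): 1}
dist = all_pairs_shortest_paths(['s1', 's2', 's3'], topo)
```

The routing component then consults the resulting distance/path tables per flow when installing routes, rather than computing a path from scratch for each packet.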
If you run
sudo ./nox_core -v dummywebpage
You should see an example webpage. Note that it uses a self-signed certificate, so you will have to accept it in your browser.
The current web applications provide a foundation for exposing NOX and its controller through a web interface. For more information on how to build components that interface with the web services, look at Webservice.
TODO
(talk about SWIG and wrappers)
(talk about creating custom app directories and how(if) this ability will be inherent in later releases)
If you still have questions, refer to the doxygen documentation, or check out the nox-dev mailing list and #noxrepo IRC channel.