diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 00000000..e69de29b diff --git a/404.html b/404.html new file mode 100644 index 00000000..7f375056 --- /dev/null +++ b/404.html @@ -0,0 +1,1208 @@ + + + + + + + + + + + + + + + + + + CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ +

404 - Not found

+ +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/Makefile.am b/Makefile.am new file mode 100644 index 00000000..7b744912 --- /dev/null +++ b/Makefile.am @@ -0,0 +1 @@ +EXTRA_DIST = $(wildcard *) diff --git a/_config.yml b/_config.yml new file mode 100644 index 00000000..c4192631 --- /dev/null +++ b/_config.yml @@ -0,0 +1 @@ +theme: jekyll-theme-cayman \ No newline at end of file diff --git a/architecture.html b/architecture.html new file mode 100644 index 00000000..0b4d5653 --- /dev/null +++ b/architecture.html @@ -0,0 +1,1443 @@ + + + + + + + + + + + + + + + + + + + + + + Architecture - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

CORE Architecture

+

Main Components

+
    +
  • core-daemon
      +
    • Manages emulated sessions of nodes and links for a given network
    • +
    • Nodes are created using Linux namespaces
    • +
    • Links are created using Linux bridges and virtual ethernet peers
    • +
    • Packets sent over links are manipulated using traffic control
    • +
    • Provides gRPC API
    • +
    +
  • +
  • core-gui
      +
    • GUI and daemon communicate over gRPC API
    • +
    • Drag and drop creation for nodes and links
    • +
    • Can launch terminals for emulated nodes in running sessions
    • +
    • Can save/open scenario files to recreate previous sessions
    • +
    +
  • +
  • vnoded
      +
    • Command line utility for creating CORE node namespaces
    • +
    +
  • +
  • vcmd
      +
    • Command line utility for sending shell commands to nodes
    • +
    +
  • +
+
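The components above are typically started from the command line; the daemon runs with root privileges and the GUI connects to it over the gRPC API (the same commands appear in the Developer's Guide below):

# start the daemon (listens for gRPC connections)
sudo core-daemon

# in another terminal, start the GUI
core-gui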

+

Sessions

+

CORE can create and run multiple emulated sessions at once. Below is an overview of the states a session transitions between during typical GUI interactions.

+

+

How Does it Work?

+

The CORE framework runs on Linux and uses Linux namespacing for creating +node containers. These nodes are linked together using Linux bridging and +virtual interfaces. CORE sessions are a set of nodes and links operating +together for a specific purpose.

+

Linux

+

Linux network namespaces (also known as netns) are the primary technique used by CORE. Most recent Linux distributions have namespaces-enabled kernels out of the box. Each namespace has its own process environment and private network stack. Network namespaces share the same filesystem in CORE.

+

CORE combines these namespaces with Linux Ethernet bridging to form networks. +Link characteristics are applied using Linux Netem queuing disciplines. +Nftables provides Ethernet frame filtering on Linux bridges. Wireless networks are +emulated by controlling which interfaces can send and receive with nftables +rules.

+
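As a rough illustration of these mechanisms (the names below are chosen for the example and are not how CORE labels things internally), two namespaces joined by a bridge with a netem link effect can be built by hand:

# create two namespaces (emulated nodes)
ip netns add n1
ip netns add n2
# create a bridge to act as the link
ip link add name br-demo type bridge
ip link set br-demo up
# create veth pairs and move one end of each into a namespace
ip link add veth-n1 type veth peer name veth-n1-br
ip link add veth-n2 type veth peer name veth-n2-br
ip link set veth-n1 netns n1
ip link set veth-n2 netns n2
ip link set veth-n1-br master br-demo up
ip link set veth-n2-br master br-demo up
# apply link characteristics using netem traffic control
tc qdisc add dev veth-n1-br root netem delay 10ms loss 1%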

Open Source Project and Resources

+

CORE has been released by Boeing to the open source community under the BSD license. If you find CORE useful for your work, please contribute back to the project. Contributions can be as simple as reporting a bug or dropping a line of encouragement, or as involved as submitting patches or maintaining aspects of the tool.

+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 00000000..1cf13b9f Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.220ee61c.min.js b/assets/javascripts/bundle.220ee61c.min.js new file mode 100644 index 00000000..116072a1 --- /dev/null +++ b/assets/javascripts/bundle.220ee61c.min.js @@ -0,0 +1,29 @@ +"use strict";(()=>{var Ci=Object.create;var gr=Object.defineProperty;var Ri=Object.getOwnPropertyDescriptor;var ki=Object.getOwnPropertyNames,Ht=Object.getOwnPropertySymbols,Hi=Object.getPrototypeOf,yr=Object.prototype.hasOwnProperty,nn=Object.prototype.propertyIsEnumerable;var rn=(e,t,r)=>t in e?gr(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,P=(e,t)=>{for(var r in t||(t={}))yr.call(t,r)&&rn(e,r,t[r]);if(Ht)for(var r of Ht(t))nn.call(t,r)&&rn(e,r,t[r]);return e};var on=(e,t)=>{var r={};for(var n in e)yr.call(e,n)&&t.indexOf(n)<0&&(r[n]=e[n]);if(e!=null&&Ht)for(var n of Ht(e))t.indexOf(n)<0&&nn.call(e,n)&&(r[n]=e[n]);return r};var Pt=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var Pi=(e,t,r,n)=>{if(t&&typeof t=="object"||typeof t=="function")for(let o of ki(t))!yr.call(e,o)&&o!==r&&gr(e,o,{get:()=>t[o],enumerable:!(n=Ri(t,o))||n.enumerable});return e};var yt=(e,t,r)=>(r=e!=null?Ci(Hi(e)):{},Pi(t||!e||!e.__esModule?gr(r,"default",{value:e,enumerable:!0}):r,e));var sn=Pt((xr,an)=>{(function(e,t){typeof xr=="object"&&typeof an!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(xr,function(){"use strict";function e(r){var n=!0,o=!1,i=null,s={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function a(O){return!!(O&&O!==document&&O.nodeName!=="HTML"&&O.nodeName!=="BODY"&&"classList"in O&&"contains"in O.classList)}function f(O){var Qe=O.type,De=O.tagName;return!!(De==="INPUT"&&s[Qe]&&!O.readOnly||De==="TEXTAREA"&&!O.readOnly||O.isContentEditable)}function c(O){O.classList.contains("focus-visible")||(O.classList.add("focus-visible"),O.setAttribute("data-focus-visible-added",""))}function u(O){O.hasAttribute("data-focus-visible-added")&&(O.classList.remove("focus-visible"),O.removeAttribute("data-focus-visible-added"))}function p(O){O.metaKey||O.altKey||O.ctrlKey||(a(r.activeElement)&&c(r.activeElement),n=!0)}function m(O){n=!1}function d(O){a(O.target)&&(n||f(O.target))&&c(O.target)}function h(O){a(O.target)&&(O.target.classList.contains("focus-visible")||O.target.hasAttribute("data-focus-visible-added"))&&(o=!0,window.clearTimeout(i),i=window.setTimeout(function(){o=!1},100),u(O.target))}function v(O){document.visibilityState==="hidden"&&(o&&(n=!0),Y())}function Y(){document.addEventListener("mousemove",N),document.addEventListener("mousedown",N),document.addEventListener("mouseup",N),document.addEventListener("pointermove",N),document.addEventListener("pointerdown",N),document.addEventListener("pointerup",N),document.addEventListener("touchmove",N),document.addEventListener("touchstart",N),document.addEventListener("touchend",N)}function 
e=b(window,"keydown").pipe(A(t=>!(t.metaKey||t.ctrlKey)),l(t=>({mode:no("search")?"search":"global",type:t.key,claim(){t.preventDefault(),t.stopPropagation()}})),A(({mode:t,type:r})=>{if(t==="global"){let n=_e();if(typeof n!="undefined")return!ka(n,r)}return!0}),pe());return Ha().pipe(g(t=>t?M:e))}function le(){return new URL(location.href)}function ot(e){location.href=e.href}function io(){return new x}function ao(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)ao(e,r)}function _(e,t,...r){let n=document.createElement(e);if(t)for(let o of Object.keys(t))typeof t[o]!="undefined"&&(typeof t[o]!="boolean"?n.setAttribute(o,t[o]):n.setAttribute(o,""));for(let o of r)ao(n,o);return n}function fr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function so(){return location.hash.substring(1)}function Dr(e){let t=_("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function Pa(e){return L(b(window,"hashchange"),e).pipe(l(so),V(so()),A(t=>t.length>0),X(1))}function co(e){return Pa(e).pipe(l(t=>ce(`[id="${t}"]`)),A(t=>typeof t!="undefined"))}function Vr(e){let t=matchMedia(e);return er(r=>t.addListener(()=>r(t.matches))).pipe(V(t.matches))}function fo(){let e=matchMedia("print");return L(b(window,"beforeprint").pipe(l(()=>!0)),b(window,"afterprint").pipe(l(()=>!1))).pipe(V(e.matches))}function zr(e,t){return e.pipe(g(r=>r?t():M))}function ur(e,t={credentials:"same-origin"}){return ue(fetch(`${e}`,t)).pipe(fe(()=>M),g(r=>r.status!==200?Ot(()=>new Error(r.statusText)):k(r)))}function We(e,t){return ur(e,t).pipe(g(r=>r.json()),X(1))}function uo(e,t){let r=new DOMParser;return ur(e,t).pipe(g(n=>n.text()),l(n=>r.parseFromString(n,"text/xml")),X(1))}function pr(e){let t=_("script",{src:e});return $(()=>(document.head.appendChild(t),L(b(t,"load"),b(t,"error").pipe(g(()=>Ot(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(l(()=>{}),R(()=>document.head.removeChild(t)),ge(1))))}function po(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function lo(){return L(b(window,"scroll",{passive:!0}),b(window,"resize",{passive:!0})).pipe(l(po),V(po()))}function mo(){return{width:innerWidth,height:innerHeight}}function ho(){return b(window,"resize",{passive:!0}).pipe(l(mo),V(mo()))}function bo(){return G([lo(),ho()]).pipe(l(([e,t])=>({offset:e,size:t})),X(1))}function lr(e,{viewport$:t,header$:r}){let n=t.pipe(ee("size")),o=G([n,r]).pipe(l(()=>Xe(e)));return G([r,t,o]).pipe(l(([{height:i},{offset:s,size:a},{x:f,y:c}])=>({offset:{x:s.x-f,y:s.y-c+i},size:a})))}(()=>{function e(n,o){parent.postMessage(n,o||"*")}function t(...n){return n.reduce((o,i)=>o.then(()=>new Promise(s=>{let a=document.createElement("script");a.src=i,a.onload=s,document.body.appendChild(a)})),Promise.resolve())}var r=class extends EventTarget{constructor(n){super(),this.url=n,this.m=i=>{i.source===this.w&&(this.dispatchEvent(new MessageEvent("message",{data:i.data})),this.onmessage&&this.onmessage(i))},this.e=(i,s,a,f,c)=>{if(s===`${this.url}`){let u=new ErrorEvent("error",{message:i,filename:s,lineno:a,colno:f,error:c});this.dispatchEvent(u),this.onerror&&this.onerror(u)}};let o=document.createElement("iframe");o.hidden=!0,document.body.appendChild(this.iframe=o),this.w.document.open(),this.w.document.write(` + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Config Services

+

Overview

+

Config services are a newer version of services for CORE that leverage a templating engine for more robust service file creation. They also support configuration key/value pairs whose values can be defined and displayed within the GUI, to help further tweak a service as needed.

+

CORE services are a convenience for creating reusable dynamic scripts +to run on nodes, for carrying out specific task(s).

+

This boils down to the following functions:

+
    +
  • generating files the service will use, either directly for commands or for configuration
  • +
  • command(s) for starting a service
  • +
  • command(s) for validating a service
  • +
  • command(s) for stopping a service
  • +
+

Most CORE nodes have a default set of services associated with them. You can, however, customize the set of services a node will use, or go further and define a new node type within the GUI with its own set of services, allowing that node type to be quickly dragged and dropped during creation.

+

Available Services

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Service Group | Services
BIRD | BGP, OSPF, RADV, RIP, Static
EMANE | Transport Service
FRR | BABEL, BGP, OSPFv2, OSPFv3, PIMD, RIP, RIPNG, Zebra
NRL | arouted, MGEN Sink, MGEN Actor, NHDP, OLSR, OLSRORG, OLSRv2, SMF
Quagga | BABEL, BGP, OSPFv2, OSPFv3, OSPFv3 MDR, RIP, RIPNG, XPIMD, Zebra
SDN | OVS, RYU
Security | Firewall, IPsec, NAT, VPN Client, VPN Server
Utility | ATD, Routing Utils, DHCP, FTP, IP Forward, PCAP, RADVD, SSF, UCARP
XORP | BGP, OLSR, OSPFv2, OSPFv3, PIMSM4, PIMSM6, RIP, RIPNG, Router Manager
+

Node Types and Default Services

+

Here are the default node types and their services:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Node Type | Services
router | zebra, OSPFv2, OSPFv3, and IPForward services for IGP link-state routing.
PC | DefaultRoute service for having a default route when connected directly to a router.
mdr | zebra, OSPFv3MDR, and IPForward services for wireless-optimized MANET Designated Router routing.
prouter | a physical router, having the same default services as the router node type; for incorporating Linux testbed machines into an emulation.
+

Configuration files can be automatically generated by each service. For +example, CORE automatically generates routing protocol configuration for the +router nodes in order to simplify the creation of virtual networks.

+

To change the services associated with a node, double-click on the node to invoke its configuration dialog and click on the Services... button, or right-click a node and choose Services... from the menu. Services are enabled or disabled by clicking on their names. The button next to each service name allows you to customize all aspects of this service for this node. For example, special route redistribution commands could be inserted into the Quagga routing configuration associated with the zebra service.

+

To change the default services associated with a node type, use the Node Types +dialog available from the Edit button at the end of the Layer-3 nodes +toolbar, or choose Node types... from the Session menu. Note that +any new services selected are not applied to existing nodes if the nodes have +been customized.

+

The node types are saved in the GUI config file ~/.coregui/config.yaml. +Keep this in mind when changing the default services for +existing node types; it may be better to simply create a new node type. It is +recommended that you do not change the default built-in node types.

+

New Services

+

Services can save time required to configure nodes, especially if a number +of nodes require similar configuration procedures. New services can be +introduced to automate tasks.

+

Creating New Services

+
+

Note

+

The directory base name used in custom_config_services_dir below should be unique and should not correspond to any existing Python module name. For example, don't use the name subprocess or services.

+
+
    +
  1. +

    Modify the example service shown below + to do what you want. It could generate config/script files, mount per-node + directories, start processes/scripts, etc. Your file can define one or more + classes to be imported. You can create multiple Python files that will be imported.

    +
  2. +
  3. +

    Put these files in a directory such as ~/.coregui/custom_services.

    +
  4. +
  5. +

    Add a custom_config_services_dir = ~/.coregui/custom_services entry to the + /etc/core/core.conf file.

    +
  6. +
  7. +

    Restart the CORE daemon (core-daemon). Any import errors (Python syntax) should be displayed in the terminal (or service log, like journalctl). A consolidated command sketch of these steps follows this list.

    +
  8. +
  9. +

    Start using your custom service on your nodes. You can create a new node + type that uses your service, or change the default services for an existing + node type, or change individual nodes.

    +
  10. +
+
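A minimal command sketch of steps 2-4 above (the service file name is illustrative, and systemd is assumed for the restart and log commands):

mkdir -p ~/.coregui/custom_services
cp example_service.py ~/.coregui/custom_services/
# add the entry from step 3 to /etc/core/core.conf:
#   custom_config_services_dir = ~/.coregui/custom_services
# restart the daemon and watch for import errors
sudo systemctl restart core-daemon
sudo journalctl -u core-daemon -f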

Example Custom Service

+

Below is the skeleton for a custom service with some documentation. Most people would likely only set up the required class variables (name/group), then define the files to generate and implement the get_text_template function to dynamically create the desired files. Finally, the startup commands would be supplied, which typically run the generated shell files.

+
from typing import Dict, List
+
+from core.config import ConfigString, ConfigBool, Configuration
+from core.configservice.base import ConfigService, ConfigServiceMode, ShadowDir
+
+
+# class that subclasses ConfigService
+class ExampleService(ConfigService):
+    # unique name for your service within CORE
+    name: str = "Example"
+    # the group your service is associated with, used for display in GUI
+    group: str = "ExampleGroup"
+    # directories that the service should shadow mount, hiding the system directory
+    directories: List[str] = [
+        "/usr/local/core",
+    ]
+    # files that this service should generate, defaults to the node's home directory
+    # or can provide an absolute path to a mounted directory
+    files: List[str] = [
+        "example-start.sh",
+        "/usr/local/core/file1",
+    ]
+    # executables that should exist on path, that this service depends on
+    executables: List[str] = []
+    # other services that this service depends on, can be used to define service start order
+    dependencies: List[str] = []
+    # commands to run to start this service
+    startup: List[str] = []
+    # commands to run to validate this service
+    validate: List[str] = []
+    # commands to run to stop this service
+    shutdown: List[str] = []
+    # validation mode, blocking, non-blocking, and timer
+    validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
+    # configurable values that this service can use, for file generation
+    default_configs: List[Configuration] = [
+        ConfigString(id="value1", label="Text"),
+        ConfigBool(id="value2", label="Boolean"),
+        ConfigString(id="value3", label="Multiple Choice", options=["value1", "value2", "value3"]),
+    ]
+    # sets of values to set for the configuration defined above, can be used to
+    # provide convenient sets of values to typically use
+    modes: Dict[str, Dict[str, str]] = {
+        "mode1": {"value1": "value1", "value2": "0", "value3": "value2"},
+        "mode2": {"value1": "value2", "value2": "1", "value3": "value3"},
+        "mode3": {"value1": "value3", "value2": "0", "value3": "value1"},
+    }
+    # defines directories that this service can help shadow within a node
+    shadow_directories: List[ShadowDir] = [
+        ShadowDir(path="/user/local/core", src="/opt/core")
+    ]
+
+    def get_text_template(self, name: str) -> str:
+        return """
+        # sample script 1
+        # node id(${node.id}) name(${node.name})
+        # config: ${config}
+        echo hello
+        """
+
+

Validation Mode

+

Validation modes are used to determine if a service has started up successfully.

+
    +
  • blocking - startup commands are expected to run until completion and return a 0 exit code
  • +
  • non-blocking - startup commands are run, but completion is not waited on
  • +
  • timer - startup commands are run, and a set amount of time is waited before the service is considered started
  • +
+

Shadow Directories

+

Shadow directories provide a convenience for copying a directory and the files within it to a node's home directory, allowing a unique set of per-node files.

+
    +
  • ShadowDir(path="/user/local/core") - copies files at the given location into the node
  • +
  • ShadowDir(path="/user/local/core", src="/opt/core") - copies files to the given location, + but sourced from the provided location
  • +
  • ShadowDir(path="/user/local/core", templates=True) - copies files and treats them as + templates for generation
  • +
  • ShadowDir(path="/user/local/core", has_node_paths=True) - copies files from the given + location, and looks for unique node names directories within it, using a directory named + default, when not preset
  • +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/ctrlnet.html b/ctrlnet.html new file mode 100644 index 00000000..bc541768 --- /dev/null +++ b/ctrlnet.html @@ -0,0 +1,1503 @@ + + + + + + + + + + + + + + + + + + + + + + Control Network - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+ +
+ + + +
+
+ + + + +

CORE Control Network

+

Overview

+

The CORE control network allows the virtual nodes to communicate with their host environment. There are two types: the primary control network and auxiliary control networks. The primary control network is used mainly for communicating with the virtual nodes from host machines and for master-slave communications in a multi-server distributed environment. Auxiliary control networks have been introduced for routing namespace-hosted emulation software traffic to the test network.

+

Activating the Primary Control Network

+

Under the Session Menu, the Options... dialog has an option to set a +control network prefix.

+

This can be set to a network prefix such as 172.16.0.0/24. A bridge will +be created on the host machine having the last address in the prefix range +(e.g. 172.16.0.254), and each node will have an extra ctrl0 control +interface configured with an address corresponding to its node number +(e.g. 172.16.0.3 for n3.)

+
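With the example prefix above, nodes become reachable from the host over their ctrl0 addresses once the session is running (assuming a node n3 exists in the scenario):

# the host-side control bridge holds 172.16.0.254
ip addr show
# reach node n3 over the control network
ping 172.16.0.3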

A default for the primary control network may also be specified by setting +the controlnet line in the /etc/core/core.conf configuration file which +new sessions will use by default. To simultaneously run multiple sessions with +control networks, the session option should be used instead of the core.conf +default.

+
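For example, a minimal sketch of the core.conf default, reusing the prefix above:

# /etc/core/core.conf
controlnet = 172.16.0.0/24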
+

Note

+

If you have a large scenario with more than 253 nodes, use a control +network prefix that allows more than the suggested /24, such as /23 or +greater.

+
+
+

Note

+

Running a session with a control network can fail if a previous session has set up a control network and its bridge is still up. Close the previous session first or wait for it to complete. If unable to, the core-daemon may need to be restarted and the lingering bridge(s) removed manually.

+
+
# Restart the CORE Daemon
+sudo /etc/init.d/core-daemon restart
+
+# Remove lingering control network bridges
+ctrlbridges=`brctl show | grep b.ctrl | awk '{print $1}'`
+for cb in $ctrlbridges; do
+  sudo ifconfig $cb down
+  sudo brctl delbr $cb
+done
+
+
+

Note

+

If adjustments to the primary control network configuration made in /etc/core/core.conf do not seem to take effect, check if there is anything set in the Session Menu, the Options... dialog. They may need to be cleared. These per-session settings override the defaults in /etc/core/core.conf.

+
+

Control Network in Distributed Sessions

+

When the primary control network is activated for a distributed session, a +control network bridge will be created on each of the slave servers, with +GRE tunnels back to the master server's bridge. The slave control bridges +are not assigned an address. From the host, any of the nodes (local or remote) +can be accessed, just like the single server case.

+

In some situations, remote emulated nodes need to communicate with the host on which they are running and not the master server. Multiple control network prefixes can be specified in either the session option or /etc/core/core.conf, separated by spaces and beginning with the master server. Each entry has the form "server:prefix". For example, suppose the servers core1, core2, and core3 are assigned nodes in the scenario and /etc/core/core.conf is used instead of the session option:

+
controlnet=core1:172.16.1.0/24 core2:172.16.2.0/24 core3:172.16.3.0/24
+
+

Then, the control network bridges will be assigned as follows:

+
    +
  • core1 = 172.16.1.254 (assuming it is the master server),
  • +
  • core2 = 172.16.2.254
  • +
  • core3 = 172.16.3.254
  • +
+

Tunnels back to the master server will still be built, but it is up to the +user to add appropriate routes if networking between control network prefixes +is desired. The control network script may help with this.

+

Control Network Script

+

A control network script may be specified using the controlnet_updown_script +option in the /etc/core/core.conf file. This script will be run after the +bridge has been built (and address assigned) with the first argument being the +name of the bridge, and the second argument being the keyword "startup". +The script will again be invoked prior to bridge removal with the second +argument being the keyword "shutdown".

+
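A minimal sketch of such a script (the path below is illustrative; it would be referenced as controlnet_updown_script = /usr/local/bin/ctrlnet-updown.sh in /etc/core/core.conf):

#!/bin/sh
# $1 = control network bridge name, $2 = "startup" or "shutdown"
bridge=$1
action=$2
case "$action" in
    startup)
        echo "control bridge $bridge created" >> /tmp/ctrlnet.log
        # e.g. add host routes or firewall rules for the new bridge here
        ;;
    shutdown)
        echo "control bridge $bridge being removed" >> /tmp/ctrlnet.log
        ;;
esac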

Auxiliary Control Networks

+

Starting with EMANE 0.9.2, CORE will run EMANE instances within namespaces. Since it is advisable to separate the OTA traffic from other traffic, we need more than a single channel leading out of the namespace. Up to three auxiliary control networks may be defined. Multiple control networks are set up in the /etc/core/core.conf file. Lines controlnet1, controlnet2, and controlnet3 define the auxiliary networks.

+

For example, having the following /etc/core/core.conf:

+
controlnet = core1:172.17.1.0/24 core2:172.17.2.0/24 core3:172.17.3.0/24
+controlnet1 = core1:172.18.1.0/24 core2:172.18.2.0/24 core3:172.18.3.0/24
+controlnet2 = core1:172.19.1.0/24 core2:172.19.2.0/24 core3:172.19.3.0/24
+
+

This will activate the primary and two auxiliary control networks and add +interfaces ctrl0, ctrl1, ctrl2 to each node. One use case would be to +assign ctrl1 to the OTA manager device and ctrl2 to the Event Service +device in the EMANE Options dialog box and leave ctrl0 for CORE control +traffic.

+
+

Note

+

controlnet0 may be used in place of controlnet to configure +the primary control network.

+
+

Unlike the primary control network, the auxiliary control networks will not +employ tunneling since their primary purpose is for efficiently transporting +multicast EMANE OTA and event traffic. Note that there is no per-session +configuration for auxiliary control networks.

+

To extend the auxiliary control networks across a distributed test +environment, host network interfaces need to be added to them. The following +lines in /etc/core/core.conf will add host devices eth1, eth2 and eth3 +to controlnet1, controlnet2, controlnet3:

+
controlnetif1 = eth1
+controlnetif2 = eth2
+controlnetif3 = eth3
+
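Once a session is running, the control bridges and the host interfaces attached to them can be inspected from the host (bridge names are assigned by CORE and will vary):

# list bridges on the host, including the control network bridges
ip link show type bridge
# then list interfaces attached to one of them, using a name from the output above
ip link show master BRIDGE_NAME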
+
+

Note

+

There is no need to assign an interface to the primary control +network because tunnels are formed between the master and the slaves using IP +addresses that are provided in servers.conf.

+
+

Shown below is a representative diagram of the configuration above.

+

+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/devguide.html b/devguide.html new file mode 100644 index 00000000..57e11c6e --- /dev/null +++ b/devguide.html @@ -0,0 +1,1579 @@ + + + + + + + + + + + + + + + + + + + + Developers Guide - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

CORE Developer's Guide

+

Overview

+

The CORE source is written in several programming languages for historical reasons. Current development focuses on the Python modules and daemon. Here is a brief description of the source directories.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Directory | Description
daemon | Python CORE daemon/gui code that handles receiving API calls and creating containers
docs | Markdown documentation currently hosted on GitHub
man | Template files for creating man pages for various CORE command line utilities
netns | C program for creating CORE containers
+

Getting started

+

To set up CORE for development, we will leverage the automated install script.

+

Clone CORE Repo

+
cd ~/Documents
+git clone https://github.com/coreemu/core.git
+cd core
+git checkout develop
+
+

Install the Development Environment

+

This command will automatically install system dependencies, clone and build OSPF-MDR, build CORE, set up the CORE poetry environment, and install pre-commit hooks. You can refer to the install docs for issues related to different distributions.

+
./install -d
+
+

pre-commit

+

pre-commit hooks help automate running tools to check modified code. Every time a commit is made, Python utilities are run to check the validity of the code, potentially failing and backing out the commit. These checks are mandated by the current CI, so if a hook modifies or rejects files, add the changes and commit again.

+
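If you want to run the same checks by hand before committing, pre-commit can be invoked directly from the repo root:

# run all configured hooks against the entire repository
pre-commit run --all-files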

Running CORE

+

You can now run core as you normally would, or leverage some of the invoke tasks to +conveniently run tests, etc.

+
# run core-daemon
+sudo core-daemon
+
+# run gui
+core-gui
+
+# run mocked unit tests
+cd <CORE_REPO>
+inv test-mock
+
+

Linux Network Namespace Commands

+

Linux network namespace containers are often managed using the Linux Container Tools or lxc-tools package. The lxc-tools website, http://lxc.sourceforge.net/, has more information. CORE does not use these management utilities, but includes its own set of tools for instantiating and configuring network namespace containers. This section describes these tools.

+

vnoded

+

The vnoded daemon is the program used to create a new namespace, and listen on a control channel for commands that +may instantiate other processes. This daemon runs as PID 1 in the container. It is launched automatically by the CORE +daemon. The control channel is a UNIX domain socket usually named /tmp/pycore.23098/n3, for node 3 running on CORE +session 23098, for example. Root privileges are required for creating a new namespace.

+
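For example, with a session running, the per-node daemons and their control channels can be observed from the host (the session id and node names will differ):

# one vnoded process per node
ps -ef | grep vnoded
# control channel sockets live under the session directory
ls /tmp/pycore.*/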

vcmd

+

The vcmd program is used to connect to the vnoded daemon in a Linux network namespace, for running commands in the +namespace. The CORE daemon uses the same channel for setting up a node and running processes within it. This program +has two required arguments, the control channel name, and the command line to be run within the namespace. This command +does not need to run with root privileges.

+

When you double-click on a node in a running emulation, CORE will open a shell window for that node using a command +such as:

+
gnome-terminal -e vcmd -c /tmp/pycore.50160/n1 -- bash
+
+

Similarly, the IPv4 routes Observer Widget will run a command to display the routing table using a command such as:

+
vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro
+
+
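
Because vcmd does not require root, it is also convenient for scripting checks against a running session from the host. A small sketch, reusing the session and node names from the examples above:

+
# list interfaces inside n1 without opening a terminal window
+vcmd -c /tmp/pycore.50160/n1 -- ip addr show
+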

core-cleanup script

+

A script named core-cleanup is provided to clean up any running CORE emulations. It will attempt to kill any +remaining vnoded processes, kill any EMANE processes, remove the /tmp/pycore.* session directories, and remove +any bridges or nftables rules. With a -d option, it will also kill any running CORE daemon.

+
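
For example, after an emulation is torn down uncleanly, the following can be run on the host (grounded in the options described above):

+
# remove leftover namespaces, session directories, bridges, and rules
+sudo core-cleanup
+# do the same and also kill any running core-daemon
+sudo core-cleanup -d
+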

netns command

+

The netns command is not used by CORE directly. This utility can be used to run a command in a new network namespace +for testing purposes. It does not open a control channel for receiving further commands.

+

Other Useful Commands

+

Here are some other Linux commands that are useful for managing the Linux network namespace emulation.

+
# view the Linux bridging setup
+ip link show type bridge
+# view the netem rules used for applying link effects
+tc qdisc show
+# view the rules that make the wireless LAN work
+nft list ruleset
+
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/diagrams/architecture.plantuml b/diagrams/architecture.plantuml new file mode 100644 index 00000000..403886d9 --- /dev/null +++ b/diagrams/architecture.plantuml @@ -0,0 +1,44 @@ +@startuml +skinparam { + RoundCorner 8 + ComponentStyle uml2 + ComponentBorderColor #Black + InterfaceBorderColor #Black + InterfaceBackgroundColor #Yellow +} + +package User { + component "core-gui" as gui #DeepSkyBlue + component "python scripts" as scripts #DeepSkyBlue + component vcmd #DeepSkyBlue +} +package Server { + component "core-daemon" as daemon #DarkSeaGreen +} +package Python { + component core #LightSteelBlue +} +package "Linux System" { + component nodes #SpringGreen [ + nodes + (linux namespaces) + ] + component links #SpringGreen [ + links + (bridging and traffic manipulation) + ] +} + +package API { + interface gRPC as grpc +} + +gui <..> grpc +scripts <..> grpc +grpc -- daemon +scripts -- core +daemon - core +core <..> nodes +core <..> links +vcmd <..> nodes +@enduml diff --git a/diagrams/workflow.plantuml b/diagrams/workflow.plantuml new file mode 100644 index 00000000..cff943ad --- /dev/null +++ b/diagrams/workflow.plantuml @@ -0,0 +1,40 @@ +@startuml +skinparam { + RoundCorner 8 + StateBorderColor #Black + StateBackgroundColor #LightSteelBlue +} + +Definition: Session XML +Definition: GUI Drawing +Definition: Scripts + +Configuration: Configure Hooks +Configuration: Configure Services +Configuration: Configure WLAN / Mobility +Configuration: Configure EMANE + +Instantiation: Create Nodes +Instantiation: Create Interfaces +Instantiation: Create Bridges +Instantiation: Start Services + +Runtime: Interactive Shells +Runtime: Traffic Scripts +Runtime: Mobility +Runtime: Widgets + +Datacollect: Collect Files +Datacollect: Other Results + +Shutdown: Shutdown Services +Shutdown: Destroy Brdges +Shutdown: Destroy Interfaces +Shutdown: Destroy Nodes + +Definition -> Configuration +Configuration -> Instantiation +Instantiation -> Runtime +Runtime -> Datacollect +Datacollect -> Shutdown +@enduml diff --git a/distributed.html b/distributed.html new file mode 100644 index 00000000..39cba593 --- /dev/null +++ b/distributed.html @@ -0,0 +1,1639 @@ + + + + + + + + + + + + + + + + + + + + + + Distributed - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + +

CORE - Distributed Emulation

+

Overview

+

A large emulation scenario can be deployed on multiple emulation servers and +controlled by a single GUI. The GUI, representing the entire topology, can be +run on one of the emulation servers or on a separate machine.

+

Each machine that will act as an emulation server will require the installation of a distributed CORE package and some configuration to allow SSH as root.

+

CORE Configuration

+

CORE configuration settings required for using distributed functionality.

+

Edit /etc/core/core.conf or specific configuration file being used.

+
# uncomment and set this to the address that remote servers
+# use to get back to the main host, example below
+distributed_address = 192.168.0.101
+
+

EMANE Specific Configurations

+

EMANE needs to have controlnet configured in core.conf in order to startup correctly. +The names before the addresses need to match the names of distributed servers configured.

+
controlnet = core1:172.16.1.0/24 core2:172.16.2.0/24 core3:172.16.3.0/24 core4:172.16.4.0/24 core5:172.16.5.0/24
+emane_event_generate = True
+
+

Configuring SSH

+

Distributed CORE works using the python fabric library to run commands on +remote servers over SSH.

+

Remote GUI Terminals

+

You need to have the same user defined on each server, since the user used +for these remote shells is the same user that is running the CORE GUI.

+

Edit -> Preferences... -> Terminal program:

+

We currently recommend setting this to xterm -e, as the default gnome-terminal will not work.

+

You may need to install xterm, if it is not already installed.

+
sudo apt install xterm
+
+

Distributed Server SSH Configuration

+

First the distributed servers must be configured to allow passwordless root +login over SSH.

+

On distributed server:

+
# install openssh-server
+sudo apt install openssh-server
+
+# open sshd config
+vi /etc/ssh/sshd_config
+
+# verify these configurations in file
+PermitRootLogin yes
+PasswordAuthentication yes
+
+# if desired add/modify the following line to allow SSH to
+# accept all env variables
+AcceptEnv *
+
+# restart sshd
+sudo systemctl restart sshd
+
+

On master server:

+
# install package if needed
+sudo apt install openssh-client
+
+# generate ssh key if needed
+ssh-keygen -o -t rsa -b 4096 -f ~/.ssh/core
+
+# copy public key to authorized_keys file
+ssh-copy-id -i ~/.ssh/core root@server
+
+# configure fabric to use the core ssh key
+sudo vi /etc/fabric.yml
+
+# set configuration
+connect_kwargs: {"key_filename": "/home/user/.ssh/core"}
+
+

On distributed server:

+
# open sshd config
+vi /etc/ssh/sshd_config
+
+# change configuration for root login to without password
+PermitRootLogin without-password
+
+# restart sshd
+sudo systemctl restart sshd
+
+
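
Before moving on, it is worth verifying that passwordless root login works from the master with the generated key. A quick check, assuming the distributed server is reachable by the hostname server:

+
# should print the remote hostname without prompting for a password
+ssh -i ~/.ssh/core root@server hostname
+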

Fabric Config File

+

Make sure the value used below is the absolute path to the key file generated above (~/.ssh/core).

+

Add/update the fabric configuration file /etc/fabric.yml:

+
connect_kwargs: { "key_filename": "/home/user/.ssh/core" }
+
+

Add Emulation Servers in GUI

+

Within the core-gui navigate to menu option:

+

Session -> Servers...

+

Within the dialog box presented, add a new server or modify an existing one to use the name, address, and port of the server you plan to use.

+

Server configurations are loaded and written to in a configuration file for +the GUI.

+

Assigning Nodes

+

The user needs to assign nodes to emulation servers in the scenario. Making no assignment means the node will be emulated on the master server. In the configuration window of every node, a drop-down box located between the Node name and the Image button selects the name of the emulation server. By default, this menu shows (none), indicating that the node will be emulated locally on the master. When entering Execute mode, the CORE GUI will deploy the node on its assigned emulation server.

+

Another way to assign emulation servers is to select one or more nodes using +the select tool (ctrl-click to select multiple), and right-click one of the +nodes and choose Assign to....

+

The CORE emulation servers dialog box may also be used to assign nodes to servers. The assigned server name appears in parentheses next to the node name. To assign all nodes to one of the servers, click on the server name and then the all nodes button. Servers that have assigned nodes are shown in blue in the server list. Another option is to first select a subset of nodes, then open the CORE emulation servers box and use the selected nodes button.

+

IMPORTANT: Leave the nodes unassigned if they are to be run on the master +server. Do not explicitly assign the nodes to the master server.

+

GUI Visualization

+

If there is a link between two nodes residing on different servers, the GUI +will draw the link with a dashed line.

+

Concerns and Limitations

+

Wireless nodes, i.e. those connected to a WLAN node, can be assigned to +different emulation servers and participate in the same wireless network +only if an EMANE model is used for the WLAN. The basic range model does +not work across multiple servers due to the Linux bridging and nftables +rules that are used.

+
+

Note

+

The basic range wireless model does not support distributed emulation, +but EMANE does.

+
+

When nodes are linked across servers, core-daemons will automatically create the necessary tunnels between the nodes when the session is executed. Care should be taken to arrange the topology such that the number of tunnels is minimized. The tunnels carry data between servers to connect nodes as specified in the topology. These tunnels are created using GRE tunneling, similar to the Tunnel Tool.

+
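
To see the tunnels CORE has created on a host during a distributed run, one option is to list GRE-type interfaces. A sketch, assuming the tunnels appear as gretap devices:

+
# list GRE tap interfaces created for cross-server links
+ip link show type gretap
+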

Distributed Checklist

+
    +
  1. Install CORE on master server
  2. +
  3. Install distributed CORE package on all servers needed
  4. +
5. Install and configure public-key SSH access on all servers (if you want to use double-click shells or Widgets) for both the GUI user (for terminals) and root (for running CORE commands)
  6. +
  7. Update CORE configuration as needed
  8. +
  9. Choose the servers that participate in distributed emulation.
  10. +
  11. Assign nodes to desired servers, empty for master server.
  12. +
  13. Press the Start button to launch the distributed emulation.
  14. +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/docker.html b/docker.html new file mode 100644 index 00000000..0ec1b462 --- /dev/null +++ b/docker.html @@ -0,0 +1,1479 @@ + + + + + + + + + + + + + + + + + + + + + + Docker - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Docker Node Support

+

Overview

+

Provided below is some information to help set up and use Docker nodes within a CORE scenario.

+

Installation

+

Debian Systems

+
sudo apt install docker.io
+
+

RHEL Systems

+

Configuration

+

Custom configuration is required to avoid iptables rules being added and to remove the need for the default Docker network, since CORE will be orchestrating connections between nodes.

+

Place the file below in /etc/docker/daemon.json

+
{
+  "bridge": "none",
+  "iptables": false
+}
+
+
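
After placing the file, restart the Docker daemon so the settings take effect. A quick sanity check, assuming the default docker0 bridge should no longer be created:

+
# restart docker to pick up the new configuration
+sudo systemctl restart docker
+# with "bridge": "none" this should report that docker0 does not exist
+ip link show docker0
+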

Group Setup

+

To use Docker nodes within the python GUI, you will need to make sure the +user running the GUI is a member of the docker group.

+
# add group if does not exist
+sudo groupadd docker
+
+# add user to group
+sudo usermod -aG docker $USER
+
+# to get this change to take effect, log out and back in or run the following
+newgrp docker
+
+
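
To confirm the group change took effect, the user should now be able to talk to the Docker daemon without sudo:

+
# should list containers without a permission error
+docker ps
+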

Image Requirements

+

Images used by Docker nodes in CORE need to have networking tools installed for +CORE to automate setup and configuration of the network within the container.

+

Example Dockerfile:

+
FROM ubuntu:latest
+RUN apt-get update
+RUN apt-get install -y iproute2 ethtool
+
+

Build image:

+
sudo docker build -t <name> .
+
+

Tools and Versions Tested With

+
    +
  • Docker version 18.09.5, build e8ff056
  • +
  • nsenter from util-linux 2.31.1
  • +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/emane.html b/emane.html new file mode 100644 index 00000000..d61b2bb1 --- /dev/null +++ b/emane.html @@ -0,0 +1,1684 @@ + + + + + + + + + + + + + + + + + + + + + + Overview - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

EMANE (Extendable Mobile Ad-hoc Network Emulator)

+

What is EMANE?

+

The Extendable Mobile Ad-hoc Network Emulator (EMANE) allows heterogeneous +network emulation using a pluggable MAC and PHY layer architecture. The +EMANE framework provides an implementation architecture for modeling +different radio interface types in the form of Network Emulation Modules +(NEMs) and incorporating these modules into a real-time emulation running +in a distributed environment.

+

EMANE is developed by U.S. Naval Research Labs (NRL) Code 5522 and Adjacent +Link LLC, who maintain these websites:

+ +

Instead of building Linux Ethernet bridging networks with CORE, +higher-fidelity wireless networks can be emulated using EMANE bound to virtual +devices. CORE emulates layers 3 and above (network, session, application) with +its virtual network stacks and process space for protocols and applications, +while EMANE emulates layers 1 and 2 (physical and data link) using its +pluggable PHY and MAC models.

+

The interface between CORE and EMANE is a TAP device. CORE builds the virtual +node using Linux network namespaces, installs the TAP device into the namespace +and instantiates one EMANE process in the namespace. The EMANE process binds a +user space socket to the TAP device for sending and receiving data from CORE.

+

An EMANE instance sends and receives OTA (Over-The-Air) traffic to and from +other EMANE instances via a control port (e.g. ctrl0, ctrl1). It also +sends and receives Events to and from the Event Service using the same or a +different control port. EMANE models are configured through the GUI's +configuration dialog. A corresponding EmaneModel Python class is sub-classed +for each supported EMANE model, to provide configuration items and their +mapping to XML files. This way new models can be easily supported. When +CORE starts the emulation, it generates the appropriate XML files that +specify the EMANE NEM configuration, and launches the EMANE daemons.

+

Some EMANE models support location information to determine when packets +should be dropped. EMANE has an event system where location events are +broadcast to all NEMs. CORE can generate these location events when nodes +are moved on the canvas. The canvas size and scale dialog has controls for +mapping the X,Y coordinate system to a latitude, longitude geographic system +that EMANE uses. When specified in the core.conf configuration file, CORE +can also subscribe to EMANE location events and move the nodes on the canvas +as they are moved in the EMANE emulation. This would occur when an Emulation +Script Generator, for example, is running a mobility script.

+

EMANE in CORE

+

This section will cover some high level topics and examples for running and +using EMANE in CORE.

+

You can find more detailed tutorials and examples at the +EMANE Tutorial.

+

Every topic below assumes CORE, EMANE, and OSPF MDR have been installed.

+
+

Info

+

Demo files will be found within the core-gui ~/.coregui/xmls directory

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Topic | Model | Description
XML Files | RF Pipe | Overview of generated XML files used to drive EMANE
GPSD | RF Pipe | Overview of running and integrating gpsd with EMANE
Precomputed | RF Pipe | Overview of using the precomputed propagation model
EEL | RF Pipe | Overview of using the Emulation Event Log (EEL) Generator
Antenna Profiles | RF Pipe | Overview of using antenna profiles in EMANE
+

EMANE Configuration

+

The CORE configuration file /etc/core/core.conf has options specific to +EMANE. An example emane section from the core.conf file is shown below:

+
# EMANE configuration
+emane_platform_port = 8101
+emane_transform_port = 8201
+emane_event_monitor = False
+#emane_models_dir = /home/<user>/.coregui/custom_emane
+# EMANE log level range [0,4] default: 2
+emane_log_level = 2
+emane_realtime = True
+# prefix used for emane installation
+# emane_prefix = /usr
+
+

If you have an EMANE event generator (e.g. mobility or pathloss scripts) and +want to have CORE subscribe to EMANE location events, set the following line +in the core.conf configuration file.

+
+

Note

+

Do not set this option to True if you want to manually drag nodes around +on the canvas to update their location in EMANE.

+
+
emane_event_monitor = True
+
+

Another common issue is if installing EMANE from source, the default configure +prefix will place the DTD files in /usr/local/share/emane/dtd while CORE +expects them in /usr/share/emane/dtd.

+

Update the EMANE prefix configuration to resolve this problem.

+
emane_prefix = /usr/local
+
+

Custom EMANE Models

+

CORE supports custom developed EMANE models by way of dynamically loading user +created python files that represent the model. Custom EMANE models should be +placed within the path defined by emane_models_dir in the CORE +configuration file. This path cannot end in /emane.

+

Here is an example model with documentation describing functionality:

+
"""
+Example custom emane model.
+"""
+from pathlib import Path
+from typing import Dict, Optional, Set, List
+
+from core.config import Configuration
+from core.emane import emanemanifest, emanemodel
+
+
+class ExampleModel(emanemodel.EmaneModel):
+    """
+    Custom emane model.
+
+    :cvar name: defines the emane model name that will show up in the GUI
+
+    Mac Definition:
+    :cvar mac_library: defines that mac library that the model will reference
+    :cvar mac_xml: defines the mac manifest file that will be parsed to obtain configuration options,
+        that will be displayed within the GUI
+    :cvar mac_defaults: allows you to override options that are maintained within the manifest file above
+    :cvar mac_config: parses the manifest file and converts configurations into core supported formats
+
+    Phy Definition:
+    NOTE: phy configuration will default to the universal model as seen below and the below section does not
+    have to be included
+    :cvar phy_library: defines that phy library that the model will reference, used if you need to
+        provide a custom phy
+    :cvar phy_xml: defines the phy manifest file that will be parsed to obtain configuration options,
+        that will be displayed within the GUI
+    :cvar phy_defaults: allows you to override options that are maintained within the manifest file above
+        or for the default universal model
+    :cvar phy_config: parses the manifest file and converts configurations into core supported formats
+
+    Custom Override Options:
+    NOTE: these options default to what's seen below and do not have to be included
+    :cvar config_ignore: allows you to ignore options within phy/mac, used typically if you needed to add
+        a custom option for display within the gui
+    """
+
+    name: str = "emane_example"
+    mac_library: str = "rfpipemaclayer"
+    mac_xml: str = "/usr/share/emane/manifest/rfpipemaclayer.xml"
+    mac_defaults: Dict[str, str] = {
+        "pcrcurveuri": "/usr/share/emane/xml/models/mac/rfpipe/rfpipepcr.xml"
+    }
+    mac_config: List[Configuration] = []
+    phy_library: Optional[str] = None
+    phy_xml: str = "/usr/share/emane/manifest/emanephy.xml"
+    phy_defaults: Dict[str, str] = {
+        "subid": "1", "propagationmodel": "2ray", "noisemode": "none"
+    }
+    phy_config: List[Configuration] = []
+    config_ignore: Set[str] = set()
+
+    @classmethod
+    def load(cls, emane_prefix: Path) -> None:
+        """
+        Called after being loaded within the EmaneManager. Provides configured
+        emane_prefix for parsing xml files.
+
+        :param emane_prefix: configured emane prefix path
+        :return: nothing
+        """
+        cls._load_platform_config(emane_prefix)
+        manifest_path = "share/emane/manifest"
+        # load mac configuration
+        mac_xml_path = emane_prefix / manifest_path / cls.mac_xml
+        cls.mac_config = emanemanifest.parse(mac_xml_path, cls.mac_defaults)
+        # load phy configuration
+        phy_xml_path = emane_prefix / manifest_path / cls.phy_xml
+        cls.phy_config = emanemanifest.parse(phy_xml_path, cls.phy_defaults)
+
+

Single PC with EMANE

+

This section describes running CORE and EMANE on a single machine. This is the +default mode of operation when building an EMANE network with CORE. The OTA +manager and Event service interface are set to use ctrl0 and the virtual +nodes use the primary control channel for communicating with one another. The +primary control channel is automatically activated when a scenario involves +EMANE. Using the primary control channel prevents your emulation session from +sending multicast traffic on your local network and interfering with other +EMANE users.

+
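
Once such a session is running, the primary control channel shows up inside each node. A quick check from a node's shell (a sketch, not required for normal use):

+
# the control channel interface should be present with an address
+ip addr show dev ctrl0
+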

EMANE is configured through an EMANE node. Once a node is linked to an EMANE +cloud, the radio interface on that node may also be configured +separately (apart from the cloud.)

+

Right click on an EMANE node and select EMANE Config to open the configuration dialog. +The EMANE models should be listed here for selection. (You may need to restart the +CORE daemon if it was running prior to installing the EMANE Python bindings.)

+

When an EMANE model is selected, you can click on the model options button, causing the GUI to query the CORE daemon for configuration items. Each model will have different parameters; refer to the EMANE documentation for an explanation of each item. The default values are presented in the dialog. Clicking Apply and Apply again will store the EMANE model selections.

+

The RF-PIPE and IEEE 802.11abg models use a Universal PHY that supports +geographic location information for determining pathloss between nodes. A +default latitude and longitude location is provided by CORE and this +location-based pathloss is enabled by default; this is the pathloss mode +setting for the Universal PHY. Moving a node on the canvas while the +emulation is running generates location events for EMANE. To view or change +the geographic location or scale of the canvas use the Canvas Size and Scale +dialog available from the Canvas menu.

+

Note that conversion between geographic and Cartesian coordinate systems is +done using UTM (Universal Transverse Mercator) projection, where different +zones of 6 degree longitude bands are defined. The location events generated +by CORE may become inaccurate near the zone boundaries for very large scenarios +that span multiple UTM zones. It is recommended that EMANE location scripts be +used to achieve geo-location accuracy in this situation.

+

Clicking the green Start button launches the emulation and causes TAP devices +to be created in the virtual nodes that are linked to the EMANE WLAN. These +devices appear with interface names such as eth0, eth1, etc. The EMANE processes +should now be running in each namespace.

+
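
A simple way to confirm the per-node EMANE process is up, again from a node's shell:

+
# list any emane processes running in this namespace
+ps ax | grep [e]mane
+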

To view the configuration generated by CORE, look in the /tmp/pycore.nnnnn/ session +directory to find the generated EMANE xml files. One easy way to view +this information is by double-clicking one of the virtual nodes and listing the files +in the shell.

+

+

Distributed EMANE

+

Running CORE and EMANE distributed among two or more emulation servers is +similar to running on a single machine. There are a few key configuration +items that need to be set in order to be successful, and those are outlined here.

+

It is a good idea to maintain separate networks for data (OTA) and control. +The control network may be a shared laboratory network, for example, and you do +not want multicast traffic on the data network to interfere with other EMANE +users. Furthermore, control traffic could interfere with the OTA latency and +throughput and might affect emulation fidelity. The examples described here will +use eth0 as a control interface and eth1 as a data interface, although +using separate interfaces is not strictly required. Note that these interface +names refer to interfaces present on the host machine, not virtual interfaces +within a node.

+

IMPORTANT: If an auxiliary control network is used, an interface on the host +has to be assigned to that network.

+

Each machine that will act as an emulation server needs to have distributed CORE and EMANE installed, as well as be set up to work in CORE distributed mode.

+

The IP addresses of the available servers are configured from the CORE +servers dialog box. The dialog shows available +servers, some or all of which may be assigned to nodes on the canvas.

+

Nodes need to be assigned to servers and can be done so using the node +configuration dialog. When a node is not assigned to any emulation server, +it will be emulated locally.

+

Using the EMANE node configuration dialog, you can change the EMANE model being used, along with changing any configuration settings from their defaults.

+

+
+

Note

+

Here is a quick checklist for distributed emulation with EMANE.

+
+
    +
  1. Follow the steps outlined for normal CORE.
  2. +
  3. Assign nodes to desired servers
  4. +
5. Synchronize the clocks of your machines prior to starting the emulation, using ntp or ptp (see the sketch after this list). Some EMANE models are sensitive to timing.
  6. +
  7. Press the Start button to launch the distributed emulation.
  8. +
+
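
For the clock synchronization step, a minimal sketch, assuming ntpdate is installed and a public NTP pool is reachable (chrony or ptp tooling works just as well):

+
# one-shot clock sync, run on each emulation server
+sudo ntpdate -u pool.ntp.org
+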

Now when the Start button is used to instantiate the emulation, the local CORE +daemon will connect to other emulation servers that have been assigned +to nodes. Each server will have its own session directory where the +platform.xml file and other EMANE XML files are generated. The NEM IDs are +automatically coordinated across servers so there is no overlap.

+

An Ethernet device is used for disseminating multicast EMANE events, as +specified in the configure emane dialog. EMANE's Event Service can be run +with mobility or pathloss scripts. +If CORE is not subscribed to location events, it will generate them as nodes +are moved on the canvas.

+

Double-clicking on a node during runtime will cause the GUI to attempt to SSH +to the emulation server for that node and run an interactive shell. The public +key SSH configuration should be tested with all emulation servers prior to +starting the emulation.

+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/emane/antenna.html b/emane/antenna.html new file mode 100644 index 00000000..797c6f51 --- /dev/null +++ b/emane/antenna.html @@ -0,0 +1,1840 @@ + + + + + + + + + + + + + + + + + + + + + + Antenna - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

EMANE Antenna Profiles

+

Overview

+

Introduction to using the EMANE antenna profile in CORE, based on the example +EMANE Demo linked below.

+

See EMANE Demo 6 for more specifics.

+

Demo Setup

+

We will need to create some files in advance of starting this session.

+

Create directory to place antenna profile files.

+
mkdir /tmp/emane
+
+

Create /tmp/emane/antennaprofile.xml with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE profiles SYSTEM "file:///usr/share/emane/dtd/antennaprofile.dtd">
+<profiles>
+    <profile id="1"
+             antennapatternuri="/tmp/emane/antenna30dsector.xml"
+             blockagepatternuri="/tmp/emane/blockageaft.xml">
+        <placement north="0" east="0" up="0"/>
+    </profile>
+    <profile id="2"
+             antennapatternuri="/tmp/emane/antenna30dsector.xml">
+        <placement north="0" east="0" up="0"/>
+    </profile>
+</profiles>
+
+

Create /tmp/emane/antenna30dsector.xml with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE antennaprofile SYSTEM "file:///usr/share/emane/dtd/antennaprofile.dtd">
+
+<!-- 30degree sector antenna pattern with main beam at +6dB and gain decreasing by 3dB every 5 degrees in elevation or bearing.-->
+<antennaprofile>
+    <antennapattern>
+        <elevation min='-90' max='-16'>
+            <bearing min='0' max='359'>
+                <gain value='-200'/>
+            </bearing>
+        </elevation>
+        <elevation min='-15' max='-11'>
+            <bearing min='0' max='5'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='6' max='10'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='11' max='15'>
+                <gain value='-6'/>
+            </bearing>
+            <bearing min='16' max='344'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='345' max='349'>
+                <gain value='-6'/>
+            </bearing>
+            <bearing min='350' max='354'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='355' max='359'>
+                <gain value='0'/>
+            </bearing>
+        </elevation>
+        <elevation min='-10' max='-6'>
+            <bearing min='0' max='5'>
+                <gain value='3'/>
+            </bearing>
+            <bearing min='6' max='10'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='11' max='15'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='16' max='344'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='345' max='349'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='350' max='354'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='355' max='359'>
+                <gain value='3'/>
+            </bearing>
+        </elevation>
+        <elevation min='-5' max='-1'>
+            <bearing min='0' max='5'>
+                <gain value='6'/>
+            </bearing>
+            <bearing min='6' max='10'>
+                <gain value='3'/>
+            </bearing>
+            <bearing min='11' max='15'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='16' max='344'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='345' max='349'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='350' max='354'>
+                <gain value='3'/>
+            </bearing>
+            <bearing min='355' max='359'>
+                <gain value='6'/>
+            </bearing>
+        </elevation>
+        <elevation min='0' max='5'>
+            <bearing min='0' max='5'>
+                <gain value='6'/>
+            </bearing>
+            <bearing min='6' max='10'>
+                <gain value='3'/>
+            </bearing>
+            <bearing min='11' max='15'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='16' max='344'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='345' max='349'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='350' max='354'>
+                <gain value='3'/>
+            </bearing>
+            <bearing min='355' max='359'>
+                <gain value='6'/>
+            </bearing>
+        </elevation>
+        <elevation min='6' max='10'>
+            <bearing min='0' max='5'>
+                <gain value='3'/>
+            </bearing>
+            <bearing min='6' max='10'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='11' max='15'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='16' max='344'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='345' max='349'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='350' max='354'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='355' max='359'>
+                <gain value='3'/>
+            </bearing>
+        </elevation>
+        <elevation min='11' max='15'>
+            <bearing min='0' max='5'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='6' max='10'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='11' max='15'>
+                <gain value='-6'/>
+            </bearing>
+            <bearing min='16' max='344'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='345' max='349'>
+                <gain value='-6'/>
+            </bearing>
+            <bearing min='350' max='354'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='355' max='359'>
+                <gain value='0'/>
+            </bearing>
+        </elevation>
+        <elevation min='16' max='90'>
+            <bearing min='0' max='359'>
+                <gain value='-200'/>
+            </bearing>
+        </elevation>
+    </antennapattern>
+</antennaprofile>
+
+

Create /tmp/emane/blockageaft.xml with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE antennaprofile SYSTEM "file:///usr/share/emane/dtd/antennaprofile.dtd">
+
+<!-- blockage pattern: 1) entire aft in bearing (90 to 270) blocked 2) elevation below -10 blocked, 3) elevation from -10 to -1 is at -10dB to -1 dB 3) elevation from 0 to 90 no blockage-->
+<antennaprofile>
+    <blockagepattern>
+        <elevation min='-90' max='-11'>
+            <bearing min='0' max='359'>
+                <gain value='-200'/>
+            </bearing>
+        </elevation>
+        <elevation min='-10' max='-10'>
+            <bearing min='0' max='89'>
+                <gain value='-10'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-10'/>
+            </bearing>
+        </elevation>
+        <elevation min='-9' max='-9'>
+            <bearing min='0' max='89'>
+                <gain value='-9'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-9'/>
+            </bearing>
+        </elevation>
+        <elevation min='-8' max='-8'>
+            <bearing min='0' max='89'>
+                <gain value='-8'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-8'/>
+            </bearing>
+        </elevation>
+        <elevation min='-7' max='-7'>
+            <bearing min='0' max='89'>
+                <gain value='-7'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-7'/>
+            </bearing>
+        </elevation>
+        <elevation min='-6' max='-6'>
+            <bearing min='0' max='89'>
+                <gain value='-6'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-6'/>
+            </bearing>
+        </elevation>
+        <elevation min='-5' max='-5'>
+            <bearing min='0' max='89'>
+                <gain value='-5'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-5'/>
+            </bearing>
+        </elevation>
+        <elevation min='-4' max='-4'>
+            <bearing min='0' max='89'>
+                <gain value='-4'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-4'/>
+            </bearing>
+        </elevation>
+        <elevation min='-3' max='-3'>
+            <bearing min='0' max='89'>
+                <gain value='-3'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-3'/>
+            </bearing>
+        </elevation>
+        <elevation min='-2' max='-2'>
+            <bearing min='0' max='89'>
+                <gain value='-2'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-2'/>
+            </bearing>
+        </elevation>
+        <elevation min='-1' max='-1'>
+            <bearing min='0' max='89'>
+                <gain value='-1'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='-1'/>
+            </bearing>
+        </elevation>
+        <elevation min='0' max='90'>
+            <bearing min='0' max='89'>
+                <gain value='0'/>
+            </bearing>
+            <bearing min='90' max='270'>
+                <gain value='-200'/>
+            </bearing>
+            <bearing min='271' max='359'>
+                <gain value='0'/>
+            </bearing>
+        </elevation>
+    </blockagepattern>
+</antennaprofile>
+
+

Run Demo

+
    +
  1. Select Open... within the GUI
  2. +
  3. Load emane-demo-antenna.xml
  4. +
  5. Click Start Button
  6. +
7. After startup completes, double click n1 to bring up the node's terminal
  8. +
+

Example Demo

+

This demo will cover running an EMANE event service to feed in antenna, +location, and pathloss events to demonstrate how antenna profiles +can be used.

+

EMANE Event Dump

+

On n1, let's dump EMANE events, so that when we later run the EMANE event service you can monitor when and what is sent.

+
root@n1:/tmp/pycore.44917/n1.conf# emaneevent-dump -i ctrl0
+
+

Send EMANE Events

+

On the host machine create the following to send EMANE events.

+
+

Warning

+

Make sure to set the eventservicedevice to the proper control +network value

+
+

Create eventservice.xml with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventservice SYSTEM "file:///usr/share/emane/dtd/eventservice.dtd">
+<eventservice>
+    <param name="eventservicegroup" value="224.1.2.8:45703"/>
+    <param name="eventservicedevice" value="b.9001.da"/>
+    <generator definition="eelgenerator.xml"/>
+</eventservice>
+
+

Create eelgenerator.xml with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventgenerator SYSTEM "file:///usr/share/emane/dtd/eventgenerator.dtd">
+<eventgenerator library="eelgenerator">
+    <param name="inputfile" value="scenario.eel"/>
+    <paramlist name="loader">
+        <item value="commeffect:eelloadercommeffect:delta"/>
+        <item value="location,velocity,orientation:eelloaderlocation:delta"/>
+        <item value="pathloss:eelloaderpathloss:delta"/>
+        <item value="antennaprofile:eelloaderantennaprofile:delta"/>
+    </paramlist>
+</eventgenerator>
+
+

Create scenario.eel with the following contents.

+
0.0 nem:1 antennaprofile 1,0.0,0.0
+0.0 nem:4 antennaprofile 2,0.0,0.0
+#
+0.0 nem:1  pathloss nem:2,60  nem:3,60   nem:4,60
+0.0 nem:2  pathloss nem:3,60  nem:4,60
+0.0 nem:3  pathloss nem:4,60
+#
+0.0 nem:1  location gps 40.025495,-74.315441,3.0
+0.0 nem:2  location gps 40.025495,-74.312501,3.0
+0.0 nem:3  location gps 40.023235,-74.315441,3.0
+0.0 nem:4  location gps 40.023235,-74.312501,3.0
+0.0 nem:4  velocity 180.0,0.0,10.0
+#
+30.0 nem:1 velocity 20.0,0.0,10.0
+30.0 nem:1 orientation 0.0,0.0,10.0
+30.0 nem:1 antennaprofile 1,60.0,0.0
+30.0 nem:4 velocity 270.0,0.0,10.0
+#
+60.0 nem:1 antennaprofile 1,105.0,0.0
+60.0 nem:4 antennaprofile 2,45.0,0.0
+#
+90.0 nem:1 velocity 90.0,0.0,10.0
+90.0 nem:1 orientation 0.0,0.0,0.0
+90.0 nem:1 antennaprofile 1,45.0,0.0
+
+

Run the EMANE event service, monitor the events dumped on n1, and watch the link changes within the CORE GUI.

+
emaneeventservice -l 3 eventservice.xml
+
+

Stages

+

The events sent will trigger 4 different states.

+
    +
  • State 1
      +
    • n2 and n3 see each other
    • +
    • n4 and n3 are pointing away
    • +
    +
  • +
  • State 2
      +
    • n2 and n3 see each other
    • +
    • n1 and n2 see each other
    • +
    • n4 and n3 see each other
    • +
    +
  • +
  • State 3
      +
    • n2 and n3 see each other
    • +
    • n4 and n3 are pointing at each other but blocked
    • +
    +
  • +
  • State 4
      +
    • n2 and n3 see each other
    • +
    • n4 and n3 see each other
    • +
    +
  • +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/emane/eel.html b/emane/eel.html new file mode 100644 index 00000000..f46c8cfb --- /dev/null +++ b/emane/eel.html @@ -0,0 +1,1492 @@ + + + + + + + + + + + + + + + + + + + + + + EEL - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

EMANE Emulation Event Log (EEL) Generator

+

Overview

+

Introduction to using the EMANE event service and eel files to provide events.

+

See EMANE Demo 1 for more specifics.

+

Run Demo

+
    +
  1. Select Open... within the GUI
  2. +
  3. Load emane-demo-eel.xml
  4. +
  5. Click Start Button
  6. +
7. After startup completes, double click n1 to bring up the node's terminal
  8. +
+

Example Demo

+

This demo will go over defining an EMANE event service and an EEL file to drive it.

+

Viewing Events

+

On n1 we will use the EMANE event dump utility to listen to events.

+
root@n1:/tmp/pycore.46777/n1.conf# emaneevent-dump -i ctrl0
+
+

Sending Events

+

On the host machine we will create the following files and start the +EMANE event service targeting the control network.

+
+

Warning

+

Make sure to set the eventservicedevice to the proper control +network value

+
+

Create eventservice.xml with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventservice SYSTEM "file:///usr/share/emane/dtd/eventservice.dtd">
+<eventservice>
+    <param name="eventservicegroup" value="224.1.2.8:45703"/>
+    <param name="eventservicedevice" value="b.9001.f"/>
+    <generator definition="eelgenerator.xml"/>
+</eventservice>
+
+

Next we will create the eelgenerator.xml file. The EEL Generator is actually +a plugin that loads sentence parsing plugins. The sentence parsing plugins know +how to convert certain sentences, in this case commeffect, location, velocity, +orientation, pathloss and antennaprofile sentences, into their corresponding +emane event equivalents.

+
    +
  • commeffect:eelloadercommeffect:delta
  • +
  • location,velocity,orientation:eelloaderlocation:delta
  • +
  • pathloss:eelloaderpathloss:delta
  • +
  • antennaprofile:eelloaderantennaprofile:delta
  • +
+

These configuration items tell the EEL Generator which sentences to map to +which plugin and whether to issue delta or full updates.

+

Create eelgenerator.xml with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventgenerator SYSTEM "file:///usr/share/emane/dtd/eventgenerator.dtd">
+<eventgenerator library="eelgenerator">
+    <param name="inputfile" value="scenario.eel"/>
+    <paramlist name="loader">
+        <item value="commeffect:eelloadercommeffect:delta"/>
+        <item value="location,velocity,orientation:eelloaderlocation:delta"/>
+        <item value="pathloss:eelloaderpathloss:delta"/>
+        <item value="antennaprofile:eelloaderantennaprofile:delta"/>
+    </paramlist>
+</eventgenerator>
+
+

Finally, create scenario.eel with the following contents.

+
0.0  nem:1 pathloss nem:2,90.0
+0.0  nem:2 pathloss nem:1,90.0
+0.0  nem:1 location gps 40.031075,-74.523518,3.000000
+0.0  nem:2 location gps 40.031165,-74.523412,3.000000
+
+

Start the EMANE event service using the files created above.

+
emaneeventservice eventservice.xml -l 3
+
+

Sent Events

+

If we go back to look at our original terminal we will see the events logged +out to the terminal.

+
root@n1:/tmp/pycore.46777/n1.conf# emaneevent-dump -i ctrl0
+[1601858142.917224] nem: 0 event: 100 len: 66 seq: 1 [Location]
+ UUID: 0af267be-17d3-4103-9f76-6f697e13bcec
+   (1, {'latitude': 40.031075, 'altitude': 3.0, 'longitude': -74.823518})
+   (2, {'latitude': 40.031165, 'altitude': 3.0, 'longitude': -74.523412})
+[1601858142.917466] nem: 1 event: 101 len: 14 seq: 2 [Pathloss]
+ UUID: 0af267be-17d3-4103-9f76-6f697e13bcec
+   (2, {'forward': 90.0, 'reverse': 90.0})
+[1601858142.917889] nem: 2 event: 101 len: 14 seq: 3 [Pathloss]
+ UUID: 0af267be-17d3-4103-9f76-6f697e13bcec
+   (1, {'forward': 90.0, 'reverse': 90.0})
+
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/emane/files.html b/emane/files.html new file mode 100644 index 00000000..aa20c548 --- /dev/null +++ b/emane/files.html @@ -0,0 +1,1626 @@ + + + + + + + + + + + + + + + + + + + + + + Files - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

EMANE XML Files

+

Overview

+

Introduction to the XML files generated by CORE used to drive EMANE for +a given node.

+

EMANE Demo 0 +may provide more helpful details.

+

Run Demo

+
    +
  1. Select Open... within the GUI
  2. +
  3. Load emane-demo-files.xml
  4. +
  5. Click Start Button
  6. +
7. After startup completes, double click n1 to bring up the node's terminal
  8. +
+

Example Demo

+

We will take a look at the files generated in the example demo provided. In this +case we are running the RF Pipe model.

+

Generated Files

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Description
-platform.xml | configuration file for the emulator instances
-nem.xml | configuration for creating a NEM
-mac.xml | configuration for defining a NEM's MAC layer
-phy.xml | configuration for defining a NEM's PHY layer
-trans-virtual.xml | configuration when a virtual transport is being used
-trans.xml | configuration when a raw transport is being used
+

Listing File

+

Below are the files within n1 after starting the demo session.

+
root@n1:/tmp/pycore.46777/n1.conf# ls
+eth0-mac.xml  eth0-trans-virtual.xml  n1-platform.xml       var.log
+eth0-nem.xml  ipforward.sh            quaggaboot.sh         var.run
+eth0-phy.xml  n1-emane.log            usr.local.etc.quagga  var.run.quagga
+
+

Platform XML

+

The root configuration file used to run EMANE for a node is the platform xml file. +In this demo we are looking at n1-platform.xml.

+
    +
  • lists all configuration values set for the platform
  • +
  • The unique nem id given for each interface that EMANE will create for this node
  • +
  • The path to the file(s) used for definition for a given nem
  • +
+
root@n1:/tmp/pycore.46777/n1.conf# cat n1-platform.xml
+<?xml version='1.0' encoding='UTF-8'?>
+<!DOCTYPE platform SYSTEM "file:///usr/share/emane/dtd/platform.dtd">
+<platform>
+  <param name="antennaprofilemanifesturi" value=""/>
+  <param name="controlportendpoint" value="0.0.0.0:47000"/>
+  <param name="eventservicedevice" value="ctrl0"/>
+  <param name="eventservicegroup" value="224.1.2.8:45703"/>
+  <param name="eventservicettl" value="1"/>
+  <param name="otamanagerchannelenable" value="1"/>
+  <param name="otamanagerdevice" value="ctrl0"/>
+  <param name="otamanagergroup" value="224.1.2.8:45702"/>
+  <param name="otamanagerloopback" value="0"/>
+  <param name="otamanagermtu" value="0"/>
+  <param name="otamanagerpartcheckthreshold" value="2"/>
+  <param name="otamanagerparttimeoutthreshold" value="5"/>
+  <param name="otamanagerttl" value="1"/>
+  <param name="stats.event.maxeventcountrows" value="0"/>
+  <param name="stats.ota.maxeventcountrows" value="0"/>
+  <param name="stats.ota.maxpacketcountrows" value="0"/>
+  <nem id="1" name="tap1.0.f" definition="eth0-nem.xml">
+    <transport definition="eth0-trans-virtual.xml">
+      <param name="device" value="eth0"/>
+    </transport>
+  </nem>
+</platform>
+
+

NEM XML

+

The nem definition will contain reference to the transport, mac, and phy xml +definitions being used for a given nem.

+
root@n1:/tmp/pycore.46777/n1.conf# cat eth0-nem.xml
+<?xml version='1.0' encoding='UTF-8'?>
+<!DOCTYPE nem SYSTEM "file:///usr/share/emane/dtd/nem.dtd">
+<nem name="emane_rfpipe NEM">
+  <transport definition="eth0-trans-virtual.xml"/>
+  <mac definition="eth0-mac.xml"/>
+  <phy definition="eth0-phy.xml"/>
+</nem>
+
+

MAC XML

+

MAC layer configuration settings would be found in this file. CORE will write +out all values, even if the value is a default value.

+
root@n1:/tmp/pycore.46777/n1.conf# cat eth0-mac.xml
+<?xml version='1.0' encoding='UTF-8'?>
+<!DOCTYPE mac SYSTEM "file:///usr/share/emane/dtd/mac.dtd">
+<mac name="emane_rfpipe MAC" library="rfpipemaclayer">
+  <param name="datarate" value="1000000"/>
+  <param name="delay" value="0.000000"/>
+  <param name="enablepromiscuousmode" value="0"/>
+  <param name="flowcontrolenable" value="0"/>
+  <param name="flowcontroltokens" value="10"/>
+  <param name="jitter" value="0.000000"/>
+  <param name="neighbormetricdeletetime" value="60.000000"/>
+  <param name="pcrcurveuri" value="/usr/share/emane/xml/models/mac/rfpipe/rfpipepcr.xml"/>
+  <param name="radiometricenable" value="0"/>
+  <param name="radiometricreportinterval" value="1.000000"/>
+</mac>
+
+

PHY XML

+

PHY layer configuration settings would be found in this file. CORE will write +out all values, even if the value is a default value.

+
root@n1:/tmp/pycore.46777/n1.conf# cat eth0-phy.xml
+<?xml version='1.0' encoding='UTF-8'?>
+<!DOCTYPE phy SYSTEM "file:///usr/share/emane/dtd/phy.dtd">
+<phy name="emane_rfpipe PHY">
+  <param name="bandwidth" value="1000000"/>
+  <param name="fading.model" value="none"/>
+  <param name="fading.nakagami.distance0" value="100.000000"/>
+  <param name="fading.nakagami.distance1" value="250.000000"/>
+  <param name="fading.nakagami.m0" value="0.750000"/>
+  <param name="fading.nakagami.m1" value="1.000000"/>
+  <param name="fading.nakagami.m2" value="200.000000"/>
+  <param name="fixedantennagain" value="0.000000"/>
+  <param name="fixedantennagainenable" value="1"/>
+  <param name="frequency" value="2347000000"/>
+  <param name="frequencyofinterest" value="2347000000"/>
+  <param name="noisebinsize" value="20"/>
+  <param name="noisemaxclampenable" value="0"/>
+  <param name="noisemaxmessagepropagation" value="200000"/>
+  <param name="noisemaxsegmentduration" value="1000000"/>
+  <param name="noisemaxsegmentoffset" value="300000"/>
+  <param name="noisemode" value="none"/>
+  <param name="propagationmodel" value="2ray"/>
+  <param name="subid" value="1"/>
+  <param name="systemnoisefigure" value="4.000000"/>
+  <param name="timesyncthreshold" value="10000"/>
+  <param name="txpower" value="0.000000"/>
+</phy>
+
+

Transport XML

+
root@n1:/tmp/pycore.46777/n1.conf# cat eth0-trans-virtual.xml
+<?xml version='1.0' encoding='UTF-8'?>
+<!DOCTYPE transport SYSTEM "file:///usr/share/emane/dtd/transport.dtd">
+<transport name="Virtual Transport" library="transvirtual">
+  <param name="bitrate" value="0"/>
+  <param name="devicepath" value="/dev/net/tun"/>
+</transport>
+
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/emane/gpsd.html b/emane/gpsd.html new file mode 100644 index 00000000..2b61d46b --- /dev/null +++ b/emane/gpsd.html @@ -0,0 +1,1476 @@ + + + + + + + + + + + + + + + + + + + + + + GPSD - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

EMANE GPSD Integration

+

Overview

+

Introduction to integrating gpsd in CORE with EMANE.

+

EMANE Demo 0 +may provide more helpful details.

+
+

Warning

+

Requires installation of gpsd

+
+

Run Demo

+
    +
  1. Select Open... within the GUI
  2. +
  3. Load emane-demo-gpsd.xml
  4. +
  5. Click Start Button
  6. +
7. After startup completes, double click n1 to bring up the node's terminal
  8. +
+

Example Demo

+

This section will cover how to run a gpsd location agent within EMANE that will write out locations to a pseudo terminal file. That file can be read by the gpsd server, making EMANE location events available to gpsd clients.

+

EMANE GPSD Event Daemon

+

First create an eventdaemon.xml file on n1 with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventdaemon SYSTEM "file:///usr/share/emane/dtd/eventdaemon.dtd">
+<eventdaemon nemid="1">
+    <param name="eventservicegroup" value="224.1.2.8:45703"/>
+    <param name="eventservicedevice" value="ctrl0"/>
+    <agent definition="gpsdlocationagent.xml"/>
+</eventdaemon>
+
+

Then create the gpsdlocationagent.xml file on n1 with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventagent SYSTEM "file:///usr/share/emane/dtd/eventagent.dtd">
+<eventagent library="gpsdlocationagent">
+    <param name="pseudoterminalfile" value="gps.pty"/>
+</eventagent>
+
+

Start the EMANE event agent. This will facilitate feeding location events +out to a pseudo terminal file defined above.

+
emaneeventd eventdaemon.xml -r -d -l 3 -f emaneeventd.log
+
+

Start gpsd, reading in the pseudo terminal file.

+
gpsd -G -n -b $(cat gps.pty)
+
+

EMANE EEL Event Daemon

+

EEL Events will be played out from the actual host machine over the designated +control network interface. Create the following files in the same directory +somewhere on your host.

+
+

Note

+

Make sure the below eventservicedevice matches the control network +device being used on the host for EMANE

+
+

Create eventservice.xml on the host machine with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventservice SYSTEM "file:///usr/share/emane/dtd/eventservice.dtd">
+<eventservice>
+    <param name="eventservicegroup" value="224.1.2.8:45703"/>
+    <param name="eventservicedevice" value="b.9001.1"/>
+    <generator definition="eelgenerator.xml"/>
+</eventservice>
+
+

Create eelgenerator.xml on the host machine with the following contents.

+
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE eventgenerator SYSTEM "file:///usr/share/emane/dtd/eventgenerator.dtd">
+<eventgenerator library="eelgenerator">
+    <param name="inputfile" value="scenario.eel"/>
+    <paramlist name="loader">
+        <item value="commeffect:eelloadercommeffect:delta"/>
+        <item value="location,velocity,orientation:eelloaderlocation:delta"/>
+        <item value="pathloss:eelloaderpathloss:delta"/>
+        <item value="antennaprofile:eelloaderantennaprofile:delta"/>
+    </paramlist>
+</eventgenerator>
+
+

Create scenario.eel file with the following contents.

+
0.0  nem:1 location gps 40.031075,-74.523518,3.000000
+0.0  nem:2 location gps 40.031165,-74.523412,3.000000
+
+

Start the EEL event service, which will send the events defined in the file above over the control network to all EMANE nodes. These location events will be received and provided to gpsd. This allows gpsd clients to connect and get GPS locations.

+
emaneeventservice eventservice.xml -l 3
+
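
With the event service running and gpsd reading the pseudo terminal, the location feed can be checked from a gpsd client on n1. A sketch, assuming the standard gpsd client utilities are installed in the node:

+
# curses view of the current fix
+cgps -s
+# or print a handful of reports and exit
+gpspipe -w -n 5
+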
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/emane/precomputed.html b/emane/precomputed.html new file mode 100644 index 00000000..4ed54b55 --- /dev/null +++ b/emane/precomputed.html @@ -0,0 +1,1470 @@ + + + + + + + + + + + + + + + + + + + + + + Precomputed - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

EMANE Precomputed

+

Overview

+

Introduction to using the precomputed propagation model.

+

See EMANE Demo 1 for more specifics.

+

Run Demo

+
    +
  1. Select Open... within the GUI
  2. Load emane-demo-precomputed.xml
  3. Click Start Button
  4. After startup completes, double click n1 to bring up the node's terminal
+

Example Demo

+

This demo is using the RF Pipe model with the propagation model set to +precomputed.

+

Failed Pings

+

Because the propagation model is set to precomputed and no pathloss events have been sent yet, the nodes cannot ping each other.

+

Open a terminal on n1.

+
root@n1:/tmp/pycore.46777/n1.conf# ping 10.0.0.2
+connect: Network is unreachable
+
+

EMANE Shell

+

You can leverage emanesh to investigate why packets are being dropped.

+
root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy BroadcastPacketDropTable0 UnicastPacketDropTable0
+nem 1   phy BroadcastPacketDropTable0
+| NEM | Out-of-Band | Rx Sensitivity | Propagation Model | Gain Location | Gain Horizon | Gain Profile | Not FOI | Spectrum Clamp | Fade Location | Fade Algorithm | Fade Select |
+| 2   | 0           | 0              | 169               | 0             | 0            | 0            | 0       | 0              | 0             | 0              | 0           |
+
+nem 1   phy UnicastPacketDropTable0
+| NEM | Out-of-Band | Rx Sensitivity | Propagation Model | Gain Location | Gain Horizon | Gain Profile | Not FOI | Spectrum Clamp | Fade Location | Fade Algorithm | Fade Select |
+
+

In the example above, we can see that packets are being dropped by the propagation model, because we have not issued any pathloss events. You can run another command to validate whether any pathloss events have been received.

+
root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy  PathlossEventInfoTable
+nem 1   phy PathlossEventInfoTable
+| NEM | Forward Pathloss | Reverse Pathloss |
+
+

Pathloss Events

+

On the host, we will send pathloss events from all NEMs to all other NEMs.

+
+

Note

+

Make sure to properly specify the right control network device

+
+
emaneevent-pathloss 1:2 90 -i <controlnet device>
+
+

Now if we check for pathloss events again, we will see what was just sent above.

+
root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy  PathlossEventInfoTable
+nem 1   phy PathlossEventInfoTable
+| NEM | Forward Pathloss | Reverse Pathloss |
+| 2   | 90.0             | 90.0             |
+
+

You should also now be able to ping n2 from n1.

+
root@n1:/tmp/pycore.46777/n1.conf# ping -c 3 10.0.0.2
+PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
+64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=3.06 ms
+64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=2.12 ms
+64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=1.99 ms
+
+--- 10.0.0.2 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2001ms
+rtt min/avg/max/mdev = 1.991/2.393/3.062/0.479 ms
+
+ + + + + + + + + \ No newline at end of file diff --git a/grpc.html b/grpc.html new file mode 100644 index 00000000..19916620 --- /dev/null +++ b/grpc.html @@ -0,0 +1,1899 @@ + + + + + + + + + + + + + + + + + + + + + + gRPC - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

gRPC

+ +
+

Overview

+

gRPC is a client/server API for interfacing with CORE and is used by the python GUI for driving all functionality. It depends on having a running core-daemon instance to communicate with.

+

A python client can be created from the raw generated gRPC files included with CORE, or you can leverage the provided gRPC client wrapper, which encapsulates some functionality to help make things easier.

+

Python Client

+

A python client wrapper is provided at +CoreGrpcClient +to help provide some conveniences when using the API.

+
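
As a quick smoke test of the wrapper, the sketch below connects to a locally running core-daemon and prints any existing sessions; get_sessions is assumed to be the wrapper call that returns session summaries.

+
from core.api.grpc import client

# assumes a core-daemon is running locally on the default gRPC port
core = client.CoreGrpcClient()
core.connect()
# get_sessions (name assumed from the wrapper) returns summaries of known sessions
for summary in core.get_sessions():
    print(summary.id, summary.state)
+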

Client HTTP Proxy

+

Since gRPC is HTTP2 based, proxy configurations can cause issues. By default, the client disables proxy support to avoid problems when a proxy is present. You can enable proxy support when needed.

+
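
A minimal sketch of turning proxy support back on, assuming the provided client constructor exposes a proxy flag (the exact parameter name may differ between versions):

+
from core.api.grpc import client

# proxy=True is assumed to re-enable use of any configured HTTP proxy
core = client.CoreGrpcClient(proxy=True)
core.connect()
+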

Proto Files

+

Proto files define the API and the protobuf messages used to interface with it.

+

They can be found here to see the specifics of each call and the response message values that are returned.

+

Examples

+

Node Models

+

When creating nodes of type NodeType.DEFAULT, these are the default models and the services they map to (a small example follows the list below).

+
    +
  • mdr
      +
    • zebra, OSPFv3MDR, IPForward
    • +
    +
  • +
  • PC
      +
    • DefaultRoute
    • +
    +
  • +
  • router
      +
    • zebra, OSPFv2, OSPFv3, IPForward
    • +
    +
  • +
  • host
      +
    • DefaultRoute, SSH
    • +
    +
  • +
+
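
As a small illustration of the mapping above, the following sketch creates a default node using the router model; it mirrors the add_node usage shown in the examples later on this page.

+
from core.api.grpc import client
from core.api.grpc.wrappers import Position

# create grpc client and connect
core = client.CoreGrpcClient()
core.connect()

# add session
session = core.create_session()

# a default node using the "router" model, which maps to the
# zebra, OSPFv2, OSPFv3, and IPForward services
position = Position(x=100, y=100)
router = session.add_node(1, model="router", position=position)

# start session
core.start_session(session)
+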

Interface Helper

+

There is an interface helper class that can be leveraged for convenience when creating interface data for nodes. Alternatively, one can manually create a core.api.grpc.wrappers.Interface instance with the appropriate information.

+

Manually creating gRPC client interface:

+
from core.api.grpc.wrappers import Interface
+
+# id is optional and will set to the next available id
+# name is optional and will default to eth<id>
+# mac is optional and will result in a randomly generated mac
+iface = Interface(
+    id=0,
+    name="eth0",
+    ip4="10.0.0.1",
+    ip4_mask=24,
+    ip6="2001::",
+    ip6_mask=64,
+)
+
+

Leveraging the interface helper class:

+
from core.api.grpc import client
+
+iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
+# node_id is used to get an ip4/ip6 address indexed from within the above prefixes
+# iface_id is required and used as the id for the created interface
+# name is optional and would default to eth<id>
+# mac is optional and will result in a randomly generated mac
+iface_data = iface_helper.create_iface(
+    node_id=1, iface_id=0, name="eth0", mac="00:00:00:00:aa:00"
+)
+
+

Listening to Events

+

Various events that can occur within a session can be listened to.

+

Event types:

+
    +
  • session - events for changes in session state and mobility start/stop/pause
  • +
  • node - events for node movements and icon changes
  • +
  • link - events for link configuration changes and wireless link add/delete
  • +
  • config - configuration events when legacy gui joins a session
  • +
  • exception - alert/error events
  • +
  • file - file events when the legacy gui joins a session
  • +
+
from core.api.grpc import client
+from core.api.grpc.wrappers import EventType
+
+
+def event_listener(event):
+    print(event)
+
+
+# create grpc client and connect
+core = client.CoreGrpcClient()
+core.connect()
+
+# add session
+session = core.create_session()
+
+# provide no events to listen to all events
+core.events(session.id, event_listener)
+
+# provide events to listen to specific events
+core.events(session.id, event_listener, [EventType.NODE])
+


Configuring Links

+

Links can be configured at the time of creation or during runtime.

+

Currently supported configuration options:

+
    +
  • bandwidth (bps)
  • +
  • delay (us)
  • +
  • duplicate (%)
  • +
  • jitter (us)
  • +
  • loss (%)
  • +
+
from core.api.grpc import client
+from core.api.grpc.wrappers import LinkOptions, Position
+
+# interface helper
+iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
+
+# create grpc client and connect
+core = client.CoreGrpcClient()
+core.connect()
+
+# add session
+session = core.create_session()
+
+# create nodes
+position = Position(x=100, y=100)
+node1 = session.add_node(1, position=position)
+position = Position(x=300, y=100)
+node2 = session.add_node(2, position=position)
+
+# configuring when creating a link
+options = LinkOptions(
+    bandwidth=54_000_000,
+    delay=5000,
+    dup=5,
+    loss=5.5,
+    jitter=0,
+)
+iface1 = iface_helper.create_iface(node1.id, 0)
+iface2 = iface_helper.create_iface(node2.id, 0)
+link = session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2)
+
+# configuring during runtime
+link.options.loss = 10.0
+core.edit_link(session.id, link)
+
+

Peer to Peer Example

+
# required imports
+from core.api.grpc import client
+from core.api.grpc.wrappers import Position
+
+# interface helper
+iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
+
+# create grpc client and connect
+core = client.CoreGrpcClient()
+core.connect()
+
+# add session
+session = core.create_session()
+
+# create nodes
+position = Position(x=100, y=100)
+node1 = session.add_node(1, position=position)
+position = Position(x=300, y=100)
+node2 = session.add_node(2, position=position)
+
+# create link
+iface1 = iface_helper.create_iface(node1.id, 0)
+iface2 = iface_helper.create_iface(node2.id, 0)
+session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2)
+
+# start session
+core.start_session(session)
+
+

Switch/Hub Example

+
# required imports
+from core.api.grpc import client
+from core.api.grpc.wrappers import NodeType, Position
+
+# interface helper
+iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
+
+# create grpc client and connect
+core = client.CoreGrpcClient()
+core.connect()
+
+# add session
+session = core.create_session()
+
+# create nodes
+position = Position(x=200, y=200)
+switch = session.add_node(1, _type=NodeType.SWITCH, position=position)
+position = Position(x=100, y=100)
+node1 = session.add_node(2, position=position)
+position = Position(x=300, y=100)
+node2 = session.add_node(3, position=position)
+
+# create links
+iface1 = iface_helper.create_iface(node1.id, 0)
+session.add_link(node1=node1, node2=switch, iface1=iface1)
+iface1 = iface_helper.create_iface(node2.id, 0)
+session.add_link(node1=node2, node2=switch, iface1=iface1)
+
+# start session
+core.start_session(session)
+
+

WLAN Example

+
# required imports
+from core.api.grpc import client
+from core.api.grpc.wrappers import NodeType, Position
+
+# interface helper
+iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
+
+# create grpc client and connect
+core = client.CoreGrpcClient()
+core.connect()
+
+# add session
+session = core.create_session()
+
+# create nodes
+position = Position(x=200, y=200)
+wlan = session.add_node(1, _type=NodeType.WIRELESS_LAN, position=position)
+position = Position(x=100, y=100)
+node1 = session.add_node(2, model="mdr", position=position)
+position = Position(x=300, y=100)
+node2 = session.add_node(3, model="mdr", position=position)
+
+# create links
+iface1 = iface_helper.create_iface(node1.id, 0)
+session.add_link(node1=node1, node2=wlan, iface1=iface1)
+iface1 = iface_helper.create_iface(node2.id, 0)
+session.add_link(node1=node2, node2=wlan, iface1=iface1)
+
+# set wlan config using a dict mapping of currently
+# supported values, given as strings
+wlan.set_wlan(
+    {
+        "range": "280",
+        "bandwidth": "55000000",
+        "delay": "6000",
+        "jitter": "5",
+        "error": "5",
+    }
+)
+
+# start session
+core.start_session(session)
+
+

EMANE Example

+

For EMANE you can import and use one of the existing models and +use its name for configuration.

+

Current models:

+
    +
  • core.emane.ieee80211abg.EmaneIeee80211abgModel
  • +
  • core.emane.rfpipe.EmaneRfPipeModel
  • +
  • core.emane.tdma.EmaneTdmaModel
  • +
  • core.emane.bypass.EmaneBypassModel
  • +
+

Their configurations options are driven dynamically from parsed EMANE manifest files +from the installed version of EMANE.

+

Options and their purpose can be found at the EMANE Wiki.

+

If configuring EMANE global settings or model mac/phy specific settings, any value not provided will fall back to its default; when no configuration is provided at all, the defaults are used.

+
# required imports
+from core.api.grpc import client
+from core.api.grpc.wrappers import NodeType, Position
+from core.emane.models.ieee80211abg import EmaneIeee80211abgModel
+
+# interface helper
+iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
+
+# create grpc client and connect
+core = client.CoreGrpcClient()
+core.connect()
+
+# add session
+session = core.create_session()
+
+# create nodes
+position = Position(x=200, y=200)
+emane = session.add_node(
+    1, _type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name
+)
+position = Position(x=100, y=100)
+node1 = session.add_node(2, model="mdr", position=position)
+position = Position(x=300, y=100)
+node2 = session.add_node(3, model="mdr", position=position)
+
+# create links
+iface1 = iface_helper.create_iface(node1.id, 0)
+session.add_link(node1=node1, node2=emane, iface1=iface1)
+iface1 = iface_helper.create_iface(node2.id, 0)
+session.add_link(node1=node2, node2=emane, iface1=iface1)
+
+# setting emane specific emane model configuration
+emane.set_emane_model(EmaneIeee80211abgModel.name, {
+    "eventservicettl": "2",
+    "unicastrate": "3",
+})
+
+# start session
+core.start_session(session)
+
+

EMANE Model Configuration:

+
# emane network specific config, set on an emane node
+# this setting applies to all nodes connected
+emane.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"})
+
+# node specific config for an individual node connected to an emane network
+node.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"})
+
+# node interface specific config for an individual node connected to an emane network
+node.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"}, iface_id=0)
+
+

Configuring a Service

+

Services help generate and run bash scripts on nodes for a given purpose.

+

Configuring the files of a service results in a specific hard-coded script being generated, instead of the default scripts, which may leverage dynamic generation.

+

The following features can be configured for a service:

+
    +
  • files - files that will be generated
  • +
  • directories - directories that will be mounted unique to the node
  • +
  • startup - commands to run to start a service
  • +
  • validate - commands to run to validate a service
  • +
  • shutdown - commands to run to stop a service
  • +
+

Editing service properties:

+
# configure a service, for a node, for a given session
+node.service_configs[service_name] = NodeServiceData(
+    configs=["file1.sh", "file2.sh"],
+    directories=["/etc/node"],
+    startup=["bash file1.sh"],
+    validate=[],
+    shutdown=[],
+)
+
+

When editing a service file, the file name must match the name of a config file that the service will generate.

+

Editing a service file:

+
# to edit the contents of a generated file you can specify
+# the service, the file name, and its contents
+file_configs = node.service_file_configs.setdefault(service_name, {})
+file_configs[file_name] = "echo hello world"
+
+
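
Putting the two snippets above into context, here is a minimal sketch that creates a node, overrides a service, and supplies the contents of one of its generated files before starting the session. It assumes NodeServiceData is importable from the gRPC wrappers and uses UserDefined purely as an illustrative service name.

+
from core.api.grpc import client
from core.api.grpc.wrappers import NodeServiceData, Position

# create grpc client and connect
core = client.CoreGrpcClient()
core.connect()

# add session and a node
session = core.create_session()
node = session.add_node(1, position=Position(x=100, y=100))

# override the service's generated files, directories, and commands
# (depending on the node's model, UserDefined may also need to be among the node's services)
node.service_configs["UserDefined"] = NodeServiceData(
    configs=["start.sh"],
    directories=[],
    startup=["bash start.sh"],
    validate=[],
    shutdown=[],
)
# provide the contents for the generated file
file_configs = node.service_file_configs.setdefault("UserDefined", {})
file_configs["start.sh"] = "echo hello world"

# start session
core.start_session(session)
+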

File Examples

+

File versions of the network examples can be found +here. +These examples will create a session using the gRPC API when the core-daemon is running.

+

You can then switch to and attach to these sessions using either of the CORE GUIs.

+ + + + + + + + + \ No newline at end of file diff --git a/gui.html b/gui.html new file mode 100644 index 00000000..a7c5c29f --- /dev/null +++ b/gui.html @@ -0,0 +1,2569 @@ + + + + + + + + + + + + + + + + + + + + + + GUI - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

CORE GUI

+

+

Overview

+

The GUI is used to draw nodes and network devices on a canvas, linking them +together to create an emulated network session.

+

After pressing the start button, CORE will proceed through these phases, +staying in the runtime phase. After the session is stopped, CORE will +proceed to the data collection phase before tearing down the emulated +state.

+

CORE can be customized to perform any action at each state. See the +Hooks... entry on the Session Menu for details about +when these session states are reached.

+

Prerequisites

+

Beyond installing CORE, you must have the CORE daemon running. This is done +on the command line with either systemd or sysv.

+
# systemd service
+sudo systemctl daemon-reload
+sudo systemctl start core-daemon
+
+# direct invocation
+sudo core-daemon
+
+

GUI Files

+

The GUI will create a directory in your home directory on first run called +~/.coregui. This directory will help layout various files that the GUI may use.

+
    +
  • .coregui/
      +
    • backgrounds/
        +
      • place backgrounds used for display in the GUI
      • +
      +
    • +
    • custom_emane/
        +
      • place to keep custom emane models to use with the core-daemon
      • +
      +
    • +
    • custom_services/
        +
      • place to keep custom services to use with the core-daemon
      • +
      +
    • +
    • icons/
        +
      • icons the GUI uses along with customs icons desired
      • +
      +
    • +
    • mobility/
        +
      • place to keep custom mobility files
      • +
      +
    • +
    • scripts/
        +
      • place to keep core related scripts
      • +
      +
    • +
    • xmls/
        +
      • place to keep saved session xml files
      • +
      +
    • +
    • gui.log
        +
      • log file when running the gui, look here when issues occur for exceptions etc
      • +
      +
    • +
    • config.yaml
        +
      • configuration file used to save/load various gui related settings (custom nodes, layouts, addresses, etc)
      • +
      +
    • +
    +
  • +
+

Modes of Operation

+

The CORE GUI has two primary modes of operation, Edit and Execute +modes. Running the GUI, by typing core-gui with no options, starts in +Edit mode. Nodes are drawn on a blank canvas using the toolbar on the left +and configured from right-click menus or by double-clicking them. The GUI +does not need to be run as root.

+

Once editing is complete, pressing the green Start button instantiates +the topology and enters Execute mode. In execute mode, +the user can interact with the running emulated machines by double-clicking or +right-clicking on them. The editing toolbar disappears and is replaced by an +execute toolbar, which provides tools while running the emulation. Pressing +the red Stop button will destroy the running emulation and return CORE +to Edit mode.

+

Once the emulation is running, the GUI can be closed, and a prompt will appear +asking if the emulation should be terminated. The emulation may be left +running and the GUI can reconnect to an existing session at a later time.

+

The GUI can be run as a normal user on Linux.

+

The GUI currently provides the following options on startup.

+
usage: core-gui [-h] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-p]
+                [-s SESSION] [--create-dir]
+
+CORE Python GUI
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
+                        logging level
+  -p, --proxy           enable proxy
+  -s SESSION, --session SESSION
+                        session id to join
+  --create-dir          create gui directory and exit
+
+

Toolbar

+

The toolbar is a row of buttons that runs vertically along the left side of the +CORE GUI window. The toolbar changes depending on the mode of operation.

+

Editing Toolbar

+

When CORE is in Edit mode (the default), the vertical Editing Toolbar exists on +the left side of the CORE window. Below are brief descriptions for each toolbar +item, starting from the top. Most of the tools are grouped into related +sub-menus, which appear when you click on their group icon.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Description
Selection Tool | Tool for selecting, moving, configuring nodes.
Start Button | Starts Execute mode, instantiates the emulation.
Link | Allows network links to be drawn between two nodes by clicking and dragging the mouse.
+

CORE Nodes

+

These nodes will create a new node container and run associated services.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Description
Router | Runs Quagga OSPFv2 and OSPFv3 routing to forward packets.
Host | Emulated server machine having a default route, runs SSH server.
PC | Basic emulated machine having a default route, runs no processes by default.
MDR | Runs Quagga OSPFv3 MDR routing for MANET-optimized routing.
PRouter | Physical router represents a real testbed machine.
+

Network Nodes

+

These nodes are mostly used to create a Linux bridge that serves the +purpose described below.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Description
Hub | Ethernet hub forwards incoming packets to every connected node.
Switch | Ethernet switch intelligently forwards incoming packets to attached hosts using an Ethernet address hash table.
Wireless LAN | When routers are connected to this WLAN node, they join a wireless network and an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between attached wireless nodes based on the distance between them.
RJ45 | RJ45 Physical Interface Tool, emulated nodes can be linked to real physical interfaces; using this tool, real networks and devices can be physically connected to the live-running emulation.
Tunnel | Tool allows connecting together more than one CORE emulation using GRE tunnels.
+

Annotation Tools

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Description
Marker | For drawing marks on the canvas.
Oval | For drawing circles on the canvas that appear in the background.
Rectangle | For drawing rectangles on the canvas that appear in the background.
Text | For placing text captions on the canvas.
+

Execution Toolbar

+

When the Start button is pressed, CORE switches to Execute mode, and the Edit toolbar on the left of the CORE window is replaced with the Execution toolbar. Below are the items on this toolbar, starting from the top.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Description
Stop Button | Stops Execute mode, terminates the emulation, returns CORE to edit mode.
Selection Tool | In Execute mode, the Selection Tool can be used for moving nodes around the canvas, and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up menu of run-time options for that node.
Marker | For drawing freehand lines on the canvas, useful during demonstrations; markings are not saved.
Run Tool | This tool allows easily running a command on all or a subset of all nodes. A list box allows selecting any of the nodes. A text entry box allows entering any command. The command should return immediately, otherwise the display will block awaiting a response. The ping command, for example, with no parameters, is not a good idea. The result of each command is displayed in a results box. The first occurrence of the special text "NODE" will be replaced with the node name. The command will not be run on nodes that are not routers, PCs, or hosts, even if they are selected.
+

Menu

+

The menubar runs along the top of the CORE GUI window and provides access to a +variety of features. Some of the menus are detachable, such as the Widgets +menu, by clicking the dashed line at the top.

+

File Menu

+

The File menu contains options for saving and opening saved sessions.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Option | Description
New Session | This starts a new session with an empty canvas.
Save | Saves the current topology. If you have not yet specified a file name, the Save As dialog box is invoked.
Save As | Invokes the Save As dialog box for selecting a new .xml file for saving the current configuration in the XML file.
Open | Invokes the File Open dialog box for selecting a new XML file to open.
Recently used files | Above the Quit menu command is a list of recently used files, if any have been opened. You can clear this list in the Preferences dialog box. You can specify the number of files to keep in this list from the Preferences dialog. Click on one of the file names listed to open that configuration file.
Execute Python Script | Invokes a File Open dialog box for selecting a Python script to run and automatically connect to. After a selection is made, a Python Script Options dialog box is invoked to allow for command-line options to be added. The Python script must create a new CORE Session and add this session to the daemon's list of sessions in order for this to work.
Quit | The Quit command should be used to exit the CORE GUI. CORE may prompt for termination if you are currently in Execute mode. Preferences and the recently-used files list are saved.
+

Edit Menu

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Option | Description
Preferences | Invokes the Preferences dialog box.
Custom Nodes | Custom node creation dialog box.
Undo | (Disabled) Attempts to undo the last edit in edit mode.
Redo | (Disabled) Attempts to redo an edit that has been undone.
Cut, Copy, Paste, Delete | Used to cut, copy, paste, and delete a selection. When nodes are pasted, their node numbers are automatically incremented, and existing links are preserved with new IP addresses assigned. Services and their customizations are copied to the new node, but care should be taken as node IP addresses have changed with possibly old addresses remaining in any custom service configurations. Annotations may also be copied and pasted.
+

Canvas Menu

+

The canvas menu provides commands related to the editing canvas.

+ + + + + + + + + + + + + + + + + +
Option | Description
Size/scale | Invokes a Canvas Size and Scale dialog that allows configuring the canvas size, scale, and geographic reference point. The size controls allow changing the width and height of the current canvas, in pixels or meters. The scale allows specifying how many meters are equivalent to 100 pixels. The reference point controls specify the latitude, longitude, and altitude reference point used to convert between geographic and Cartesian coordinate systems. By clicking the Save as default option, all new canvases will be created with these properties. The default canvas size can also be changed in the Preferences dialog box.
Wallpaper | Used for setting the canvas background image.
+

View Menu

+

The View menu features items for toggling on and off their display on the canvas.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Option | Description
Interface Names | Display interface names on links.
IPv4 Addresses | Display IPv4 addresses on links.
IPv6 Addresses | Display IPv6 addresses on links.
Node Labels | Display node names.
Link Labels | Display link labels.
Annotations | Display annotations.
Canvas Grid | Display the canvas grid.
+

Tools Menu

+

The tools menu lists different utility functions.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Option | Description
Find | Display find dialog used for highlighting a node on the canvas.
Auto Grid | Automatically layout nodes in a grid.
IP addresses | Invokes the IP Addresses dialog box for configuring which IPv4/IPv6 prefixes are used when automatically addressing new interfaces.
MAC addresses | Invokes the MAC Addresses dialog box for configuring the starting number used as the lowest byte when generating each interface MAC address. This value should be changed when tunneling between CORE emulations to prevent MAC address conflicts.
+

Widgets Menu

+

Widgets are GUI elements that allow interaction with a running emulation. +Widgets typically automate the running of commands on emulated nodes to report +status information of some type and display this on screen.

+

Periodic Widgets

+

These Widgets are those available from the main Widgets menu. More than one +of these Widgets may be run concurrently. An event loop fires once every second +that the emulation is running. If one of these Widgets is enabled, its periodic +routine will be invoked at this time. Each Widget may have a configuration +dialog box which is also accessible from the Widgets menu.

+

Here are some standard widgets:

+
    +
  • Adjacency - displays router adjacency states for Quagga's OSPFv2 and OSPFv3 + routing protocols. A line is drawn from each router halfway to the router ID + of an adjacent router. The color of the line is based on the OSPF adjacency + state such as Two-way or Full. To learn about the different colors, see the + Configure Adjacency... menu item. The vtysh command is used to + dump OSPF neighbor information. + Only half of the line is drawn because each + router may be in a different adjacency state with respect to the other.
  • +
  • Throughput - displays the kilobits-per-second throughput above each link, + using statistics gathered from each link. If the throughput exceeds a certain + threshold, the link will become highlighted. For wireless nodes which broadcast + data to all nodes in range, the throughput rate is displayed next to the node and + the node will become circled if the threshold is exceeded.
  • +
+

Observer Widgets

+

These Widgets are available from the Observer Widgets submenu of the +Widgets menu, and from the Widgets Tool on the toolbar. Only one Observer Widget may +be used at a time. Mouse over a node while the session is running to pop up +an informational display about that node.

+

Available Observer Widgets include IPv4 and IPv6 routing tables, socket +information, list of running processes, and OSPFv2/v3 neighbor information.

+

Observer Widgets may be edited by the user and rearranged. Choosing +Widgets->Observer Widgets->Edit Observers from the Observer Widget menu will +invoke the Observer Widgets dialog. A list of Observer Widgets is displayed along +with up and down arrows for rearranging the list. Controls are available for +renaming each widget, for changing the command that is run during mouse over, and +for adding and deleting items from the list. Note that specified commands should +return immediately to avoid delays in the GUI display. Changes are saved to a +config.yaml file in the CORE configuration directory.

+

Session Menu

+

The Session Menu has entries for starting, stopping, and managing sessions, +in addition to global options such as node types, comments, hooks, servers, +and options.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Option | Description
Sessions | Invokes the CORE Sessions dialog box containing a list of active CORE sessions in the daemon. Basic session information such as name, node count, start time, and a thumbnail are displayed. This dialog allows connecting to different sessions, shutting them down, or starting a new session.
Servers | Invokes the CORE emulation servers dialog for configuring.
Options | Presents per-session options, such as the IPv4 prefix to be used, if any, for a control network; the ability to preserve the session directory; and an on/off switch for SDT3D support.
Hooks | Invokes the CORE Session Hooks window where scripts may be configured for a particular session state. The session states are defined in the table below. The top of the window has a list of configured hooks, and buttons on the bottom left allow adding, editing, and removing hook scripts. The new or edit button will open a hook script editing window. A hook script is a shell script invoked on the host (not within a virtual node).
+

Session States

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
State | Description
Definition | Used by the GUI to tell the backend to clear any state.
Configuration | When the user presses the Start button, node, link, and other configuration data is sent to the backend. This state is also reached when the user customizes a service.
Instantiation | After configuration data has been sent, just before the nodes are created.
Runtime | All nodes and networks have been built and are running. (This is the same state at which the previously-named global experiment script was run.)
Datacollect | The user has pressed the Stop button, but before services have been stopped and nodes have been shut down. This is a good time to collect log files and other data from the nodes.
Shutdown | All nodes and networks have been shut down and destroyed.
+

Help Menu

+ + + + + + + + + + + + + + + + + + + + + +
Option | Description
CORE Github (www) | Link to the CORE GitHub page.
CORE Documentation (www) | Link to the CORE Documentation page.
About | Invokes the About dialog box for viewing version information.
+

Building Sample Networks

+

Wired Networks

+

Wired networks are created using the Link Tool to draw a link between two +nodes. This automatically draws a red line representing an Ethernet link and +creates new interfaces on network-layer nodes.

+

Double-click on the link to invoke the link configuration dialog box. Here +you can change the Bandwidth, Delay, Loss, and Duplicate +rate parameters for that link. You can also modify the color and width of the +link, affecting its display.

+

Link-layer nodes are provided for modeling wired networks. These do not create +a separate network stack when instantiated, but are implemented using Linux bridging. +These are the hub, switch, and wireless LAN nodes. The hub copies each packet from +the incoming link to every connected link, while the switch behaves more like an +Ethernet switch and keeps track of the Ethernet address of the connected peer, +forwarding unicast traffic only to the appropriate ports.

+

The wireless LAN (WLAN) is covered in the next section.

+

Wireless Networks

+

Wireless networks allow moving nodes around to impact the connectivity between them. The connection between a pair of nodes is stronger when the nodes are closer, and weaker when they are further away. CORE offers several levels of wireless emulation fidelity, depending on modeling needs and available hardware.

+
    +
  • WLAN Node
      +
    • uses set bandwidth, delay, and loss
    • +
    • links are enabled or disabled based on a set range
    • +
    • uses the least CPU when moving, but nothing extra when not moving
    • +
    +
  • +
  • Wireless Node
      +
    • uses set bandwidth, delay, and initial loss
    • +
    • loss dynamically changes based on distance between nodes, which can be configured with range parameters
    • +
    • links are enabled or disabled based on a set range
    • +
    • uses more CPU to calculate loss for every movement, but nothing extra when not moving
    • +
    +
  • +
  • EMANE Node
      +
    • uses a physical layer model to account for signal propagation, antenna profile effects and interference + sources in order to provide a realistic environment for wireless experimentation
    • +
    • uses the most CPU for every packet, as complex calculations are used for fidelity
    • +
    • See Wiki for details on general EMANE usage
    • +
    • See CORE EMANE for details on using EMANE in CORE
    • +
    +
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Model | Type | Supported Platform(s) | Fidelity | Description
WLAN | On/Off | Linux | Low | Ethernet bridging with nftables
Wireless | On/Off | Linux | Medium | Ethernet bridging with nftables
EMANE | RF | Linux | High | TAP device connected to EMANE emulator with pluggable MAC and PHY radio types
+

Example WLAN Network Setup

+

To quickly build a wireless network, you can first place several router nodes onto the canvas. If you have the Quagga MDR software installed, it is recommended that you use the mdr node type for reduced routing overhead. Next choose the WLAN from the Link-layer nodes submenu. First set the desired WLAN parameters by double-clicking the cloud icon. Then you can link all selected nodes by right-clicking on the WLAN and choosing Link to Selected.

+

Linking a router to the WLAN causes a small antenna to appear, but no red link +line is drawn. Routers can have multiple wireless links and both wireless and +wired links (however, you will need to manually configure route +redistribution.) The mdr node type will generate a routing configuration that +enables OSPFv3 with MANET extensions. This is a Boeing-developed extension to +Quagga's OSPFv3 that reduces flooding overhead and optimizes the flooding +procedure for mobile ad-hoc (MANET) networks.

+

The default configuration of the WLAN is set to use the basic range model. Having this model +selected causes core-daemon to calculate the distance between nodes based +on screen pixels. A numeric range in screen pixels is set for the wireless +network using the Range slider. When two wireless nodes are within range of +each other, a green line is drawn between them and they are linked. Two +wireless nodes that are farther than the range pixels apart are not linked. +During Execute mode, users may move wireless nodes around by clicking and +dragging them, and wireless links will be dynamically made or broken.

+

Running Commands within Nodes

+

You can double click a node to bring up a terminal for running shell commands. Within the terminal you can run anything you like and those commands will be run in the context of the node. For standard CORE nodes, the only thing to keep in mind is that you are using the host file system and anything you change or do can impact the greater system. By default, your terminal will open within the node's home directory for the running session, but that directory is temporary and will be removed when the session is stopped.

+

You can also launch GUI based applications from within standard CORE nodes, but you need to +enable xhost access to root.

+
xhost +local:root
+
+

Mobility Scripting

+

CORE has a few ways to script mobility.

+ + + + + + + + + + + + + + + + + + + + + +
Option | Description
ns-2 script | The script specifies either absolute positions or waypoints with a velocity. Locations are given with Cartesian coordinates.
gRPC API | An external entity can move nodes by leveraging the gRPC API.
EMANE events | See EMANE for details on using EMANE scripts to move nodes around. Location information is typically given as latitude, longitude, and altitude.
+

For the first method, you can create a mobility script using a text editor, or using a tool such as BonnMotion, and associate the script with one of the wireless networks using the WLAN configuration dialog box. Click the ns-2 mobility script... button, and set the mobility script file field in the resulting ns2script configuration dialog.

+

Here is an example for creating a BonnMotion script for 10 nodes:

+
bm -f sample RandomWaypoint -n 10 -d 60 -x 1000 -y 750
+bm NSFile -f sample
+# use the resulting 'sample.ns_movements' file in CORE
+
+

When the Execute mode is started and one of the WLAN nodes has a mobility script, a mobility script window will appear. This window contains controls for starting, stopping, and resetting the running time for the mobility script. The loop checkbox causes the script to play continuously. The resolution text box contains the number of milliseconds between each timer event; lower values cause the mobility to appear smoother but consume greater CPU time.

+

The format of an ns-2 mobility script looks like:

+
# nodes: 3, max time: 35.000000, max x: 600.00, max y: 600.00
+$node_(2) set X_ 144.0
+$node_(2) set Y_ 240.0
+$node_(2) set Z_ 0.00
+$ns_ at 1.00 "$node_(2) setdest 130.0 280.0 15.0"
+
+

The first three lines set an initial position for node 2. The last line in the +above example causes node 2 to move towards the destination (130, 280) at +speed 15. All units are screen coordinates, with speed in units per second. +The total script time is learned after all nodes have reached their waypoints. +Initially, the time slider in the mobility script dialog will not be +accurate.

+

Example mobility scripts (and their associated topology files) can be found in the configs/ directory.

+

Alerts

+

The alerts button is located in the bottom right-hand corner +of the status bar in the CORE GUI. This will change colors to indicate one or +more problems with the running emulation. Clicking on the alerts button will invoke the +alerts dialog.

+

The alerts dialog contains a list of alerts received from +the CORE daemon. An alert has a time, severity level, optional node number, +and source. When the alerts button is red, this indicates one or more fatal +exceptions. An alert with a fatal severity level indicates that one or more +of the basic pieces of emulation could not be created, such as failure to +create a bridge or namespace, or the failure to launch EMANE processes for an +EMANE-based network.

+

Clicking on an alert displays details for that exception. The exception source is a text string to help trace where the exception occurred; "service:UserDefined", for example, would appear for a failed validation command with the UserDefined service.

+

A button is available at the bottom of the dialog for clearing the exception +list.

+

Customizing your Topology's Look

+

Several annotation tools are provided for changing the way your topology is +presented. Captions may be added with the Text tool. Ovals and rectangles may +be drawn in the background, helpful for visually grouping nodes together.

+

During live demonstrations the marker tool may be helpful for drawing temporary +annotations on the canvas that may be quickly erased. A size and color palette +appears at the bottom of the toolbar when the marker tool is selected. Markings +are only temporary and are not saved in the topology file.

+

The basic node icons can be replaced with a custom image of your choice. Icons +appear best when they use the GIF or PNG format with a transparent background. +To change a node's icon, double-click the node to invoke its configuration +dialog and click on the button to the right of the node name that shows the +node's current icon.

+

A background image for the canvas may be set using the Wallpaper... option +from the Canvas menu. The image may be centered, tiled, or scaled to fit the +canvas size. An existing terrain, map, or network diagram could be used as a +background, for example, with CORE nodes drawn on top.

+ + + + + + + + + \ No newline at end of file diff --git a/hitl.html b/hitl.html new file mode 100644 index 00000000..a7f5a77a --- /dev/null +++ b/hitl.html @@ -0,0 +1,1512 @@ + + + + + + + + + + + + + + + + + + + + + + Hardware In The Loop - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Hardware In The Loop

+

Overview

+

In some cases it may be impossible or impractical to run software using CORE +nodes alone. You may need to bring in external hardware into the network. +CORE's emulated networks run in real time, so they can be connected to live +physical networks. The RJ45 tool and the Tunnel tool help with connecting to +the real world. These tools are available from the Link Layer Nodes menu.

+

When connecting two or more CORE emulations together, MAC address collisions should be avoided. CORE automatically assigns MAC addresses to interfaces when the emulation is started, starting with 00:00:00:aa:00:00 and incrementing the bottom byte. The starting byte should be changed on the second CORE machine using the Tools->MAC Addresses option in the menu.

+

RJ45 Node

+

CORE provides the RJ45 node, which represents a physical +interface within the host that is running CORE. Any real-world network +devices can be connected to the interface and communicate with the CORE nodes in real time.

+

The main drawback is that one physical interface is required for each +connection. When the physical interface is assigned to CORE, it may not be used +for anything else. Another consideration is that the computer or network that +you are connecting to must be co-located with the CORE machine.

+

GUI Usage

+

To place an RJ45 connection, click on the Link Layer Nodes toolbar and select the RJ45 Node from the options. Click on the canvas where you would like the node to be placed. Now click on the Link Tool and draw a link between the RJ45 and the other node you wish it to be connected to. The RJ45 node will display "UNASSIGNED". Double-click the RJ45 node to assign a physical interface. A list of available interfaces will be shown, and one may be selected, then select Apply.

+
+

Note

+

When you press the Start button to instantiate your topology, the +interface assigned to the RJ45 will be connected to the CORE topology. The +interface can no longer be used by the system.

+
+

Multiple RJ45s with One Interface (VLAN)

+

It is possible to have multiple RJ45 nodes using the same physical interface by leveraging 802.1Q VLANs. This allows for more RJ45 nodes than physical ports are available, but the (e.g. switching) hardware connected to the physical port must support the VLAN tagging, and the available bandwidth will be shared.

+

You need to create separate VLAN virtual devices on the Linux host, +and then assign these devices to RJ45 nodes inside of CORE. The VLANing is +actually performed outside of CORE, so when the CORE emulated node receives a +packet, the VLAN tag will already be removed.

+

Here are example commands for creating VLAN devices under Linux:

+
ip link add link eth0 name eth0.1 type vlan id 1
+ip link add link eth0 name eth0.2 type vlan id 2
+ip link add link eth0 name eth0.3 type vlan id 3
+
+

Tunnel Tool

+

The tunnel tool builds GRE tunnels between CORE emulations or other hosts. +Tunneling can be helpful when the number of physical interfaces is limited or +when the peer is located on a different network. In this case a physical interface does +not need to be dedicated to CORE as with the RJ45 tool.

+

The peer GRE tunnel endpoint may be another CORE machine or another +host that supports GRE tunneling. When placing a Tunnel node, initially +the node will display "UNASSIGNED". This text should be replaced with the IP +address of the tunnel peer. This is the IP address of the other CORE machine or +physical machine, not an IP address of another virtual node.

+
+

Note

+

Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices. +The gretap device has an interface MTU of 1,458 bytes; when joined to a Linux +bridge, the bridge's MTU becomes 1,458 bytes. The Linux bridge will not perform +fragmentation for large packets if other bridge ports have a higher MTU such +as 1,500 bytes.

+
+

The GRE key is used to identify flows with GRE tunneling. This allows multiple +GRE tunnels to exist between that same pair of tunnel peers. A unique number +should be used when multiple tunnels are used with the same peer. When +configuring the peer side of the tunnel, ensure that the matching keys are +used.

+

Example Usage

+

Here are example commands for building the other end of a tunnel on a Linux +machine. In this example, a router in CORE has the virtual address +10.0.0.1/24 and the CORE host machine has the (real) address +198.51.100.34/24. The Linux box +that will connect with the CORE machine is reachable over the (real) network +at 198.51.100.76/24. +The emulated router is linked with the Tunnel Node. In the +Tunnel Node configuration dialog, the address 198.51.100.76 is entered, with +the key set to 1. The gretap interface on the Linux box will be assigned +an address from the subnet of the virtual router node, +10.0.0.2/24.

+
# these commands are run on the tunnel peer
+sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
+sudo ip addr add 10.0.0.2/24 dev gt0
+sudo ip link set dev gt0 up
+
+

Now the virtual router should be able to ping the Linux machine:

+
# from the CORE router node
+ping 10.0.0.2
+
+

And the Linux machine should be able to ping inside the CORE emulation:

+
# from the tunnel peer
+ping 10.0.0.1
+
+

To debug this configuration, tcpdump can be run on the gretap devices, or +on the physical interfaces on the CORE or Linux machines. Make sure that a +firewall is not blocking the GRE traffic.

+ + + + + + + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 00000000..66a1f4d2 --- /dev/null +++ b/index.html @@ -0,0 +1,1342 @@ + + + + + + + + + + + + + + + + + + + + CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

CORE Documentation

+

Introduction

+

CORE (Common Open Research Emulator) is a tool for building virtual networks. As an emulator, CORE builds a +representation of a real computer network that runs in real time, as opposed to simulation, where abstract models are +used. The live-running emulation can be connected to physical networks and routers. It provides an environment for +running real applications and protocols, taking advantage of tools provided by the Linux operating system.

+

CORE is typically used for network and protocol research, demonstrations, application and platform testing, evaluating +networking scenarios, security studies, and increasing the size of physical test networks.

+

Key Features

+
    +
  • Efficient and scalable
  • +
  • Runs applications and protocols without modification
  • +
  • Drag and drop GUI
  • +
  • Highly customizable
  • +
+ + + + + + + + + \ No newline at end of file diff --git a/install.html b/install.html new file mode 100644 index 00000000..4ef59445 --- /dev/null +++ b/install.html @@ -0,0 +1,1987 @@ + + + + + + + + + + + + + + + + + + + + + + Overview - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Installation

+
+

Warning

+

If Docker is installed, the default iptables rules will block CORE traffic

+
+

Overview

+

CORE currently supports and provides the following installation options, with the package +option being preferred.

+ +

Requirements

+

Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous +containers, as a general rule you should select a machine having as much RAM and CPU resources as possible.

+
    +
  • Linux Kernel v3.3+
  • +
  • iproute2 4.5+ is a requirement for bridge related commands
  • +
  • nftables compatible kernel and nft command line tool
  • +
+

Supported Linux Distributions

+

The plan is to support recent Ubuntu and CentOS LTS releases.

+

Verified:

+
    +
  • Ubuntu - 18.04, 20.04, 22.04
  • +
  • CentOS - 7.8
  • +
+

Files

+

The following is a list of files that will be installed.

+
    +
  • executables
      +
    • <prefix>/bin/{vcmd, vnode}
    • +
    • can be adjusted using the script based install; the package install will use /usr
    • +
    +
  • +
  • python files
      +
    • virtual environment /opt/core/venv
    • +
    • local install will be local to the python version used
        +
      • python3 -c "import core; print(core.__file__)"
      • +
      +
    • +
    • scripts {core-daemon, core-cleanup, etc}
        +
      • virtualenv /opt/core/venv/bin
      • +
      • local /usr/local/bin
      • +
      +
    • +
    +
  • +
  • configuration files
      +
    • /etc/core/{core.conf, logging.conf}
    • +
    +
  • +
  • ospf mdr repository files when using script based install
      +
    • <repo>/../ospf-mdr
    • +
    +
  • +
+

Installed Scripts

+

The following python scripts are provided.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Description
core-cleanup | tool to help remove lingering core created containers, bridges, directories
core-cli | tool to query, open xml files, and send commands using gRPC
core-daemon | runs the backend core server providing a gRPC API
core-gui | starts the GUI
core-python | provides a convenience for running the core python virtual environment
core-route-monitor | tool to help monitor traffic across nodes and feed that to SDT
core-service-update | tool to help automate modifying a legacy service to match current naming
+

Upgrading from Older Release

+

Please make sure to uninstall any previous installations of CORE cleanly +before proceeding to install.

+

Clearing out a current install from 7.0.0+, making sure to provide options +used for install (-l or -p).

+
cd <CORE_REPO>
+inv uninstall <options>
+
+

Previous install was built from source for CORE release older than 7.0.0:

+
cd <CORE_REPO>
+sudo make uninstall
+make clean
+./bootstrap.sh clean
+
+

Installed from previously built packages:

+
# centos
+sudo yum remove core
+# ubuntu
+sudo apt remove core
+
+

Installation Examples

+

The below links will take you to sections providing complete examples for installing +CORE and related utilities on fresh installations. Otherwise, a breakdown for installing +different components and the options available are detailed below.

+ +

Package Based Install

+

Starting with 9.0.0 there are pre-built rpm/deb packages. You can retrieve the +rpm/deb package from releases page.

+

The built packages will require and install system level dependencies, as well as running a post install script to install the provided CORE python wheel. A similar uninstall script is run when uninstalling and requires the same options as were given during the install.

+
+

Note

+

PYTHON defaults to python3 for installs below, CORE requires python3.9+, pip, +tk compatibility for python gui, and venv for virtual environments

+
+

Examples for install:

+
# recommended to upgrade to the latest version of pip before installation
+# in python, can help avoid building from source issues
+sudo <python> -m pip install --upgrade pip
+# install vcmd/vnoded, system dependencies,
+# and core python into a venv located at /opt/core/venv
+sudo <yum/apt> install -y ./<package>
+# disable the venv and install to python directly
+sudo NO_VENV=1 <yum/apt> install -y ./<package>
+# change python executable used to install for venv or direct installations
+sudo PYTHON=python3.9 <yum/apt> install -y ./<package>
+# disable venv and change python executable
+sudo NO_VENV=1 PYTHON=python3.9 <yum/apt> install -y ./<package>
+# skip installing the python portion entirely, as you plan to carry this out yourself
+# core python wheel is located at /opt/core/core-<version>-py3-none-any.whl
+sudo NO_PYTHON=1 <yum/apt> install -y ./<package>
+# install python wheel into python of your choosing
+sudo <python> -m pip install /opt/core/core-<version>-py3-none-any.whl
+
+

Example for removal, requires using the same options as install:

+
# remove a standard install
+sudo <yum/apt> remove core
+# remove a local install
+sudo NO_VENV=1 <yum/apt> remove core
+# remove install using alternative python
+sudo PYTHON=python3.9 <yum/apt> remove core
+# remove install using alternative python and local install
+sudo NO_VENV=1 PYTHON=python3.9 <yum/apt> remove core
+# remove install and skip python uninstall
+sudo NO_PYTHON=1 <yum/apt> remove core
+
+

Installing OSPF MDR

+

You will need to manually install OSPF MDR for routing nodes, since this is not +provided by the package.

+
git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git
+cd ospf-mdr
+./bootstrap.sh
+./configure --disable-doc --enable-user=root --enable-group=root \
+  --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
+  --localstatedir=/var/run/quagga
+make -j$(nproc)
+sudo make install
+
+

When done see Post Install.

+

Script Based Install

+

The script based installation will install system level dependencies, python library and +dependencies, as well as dependencies for building CORE.

+

The script based install also automatically builds and installs OSPF MDR, used by default +on routing nodes. This can optionally be skipped.

+

Installation will carry out the following steps:

+
    +
  • installs system dependencies for building core
  • +
  • builds vcmd/vnoded and python grpc files
  • +
  • installs core into poetry managed virtual environment or locally, if flag is passed
  • +
  • installs systemd service pointing to appropriate python location based on install type
  • +
  • clone/build/install a working version of OSPF MDR
  • +
+
+

Note

+

Installing locally comes with its own risks; it can result in potential dependency conflicts with python dependencies installed by the system package manager

+
+
+

Note

+

Provide a prefix that will be found on PATH when running as sudo, if the default prefix /usr/local will not be valid

+
+
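To check which PATH sudo will actually use on your system (a quick sanity check; secure_path settings vary by distribution):

# show the PATH seen by commands run through sudo
+sudo sh -c 'echo $PATH'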

The following tools will be leveraged during installation:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Tool   | Description
pip    | used to install pipx
pipx   | used to install standalone python tools (invoke, poetry)
invoke | used to run provided tasks (install, uninstall, reinstall, etc.)
poetry | used to install the python virtual environment or to build a python wheel
+

First we will need to clone and navigate to the CORE repo.

+
# clone CORE repo
+git clone https://github.com/coreemu/core.git
+cd core
+
+# install dependencies to run installation task
+./setup.sh
+# skip installing system packages, due to using python built from source
+NO_SYSTEM=1 ./setup.sh
+
+# run the following or open a new terminal
+source ~/.bashrc
+
+# Ubuntu
+inv install
+# CentOS
+inv install -p /usr
+# optionally skip python system packages
+inv install --no-python
+# optionally skip installing ospf mdr
+inv install --no-ospf
+
+# install command options
+Usage: inv[oke] [--core-opts] install [--options] [other tasks here ...]
+
+Docstring:
+  install core, poetry, scripts, service, and ospf mdr
+
+Options:
+  -d, --dev                          install development mode
+  -i STRING, --install-type=STRING   used to force an install type, can be one of the following (redhat, debian)
+  -l, --local                        determines if core will install to local system, default is False
+  -n, --no-python                    avoid installing python system dependencies
+  -o, --[no-]ospf                    disable ospf installation
+  -p STRING, --prefix=STRING         prefix where scripts are installed, default is /usr/local
+  -v, --verbose
+
+

When done see Post Install.

+

Unsupported Linux Distribution

+

For unsupported OSs, you can attempt the following to translate an installation to your use case.

+
    +
  • make sure you have python3.9+ with venv support
  • +
  • make sure you have python3 invoke available to leverage <repo>/tasks.py
  • +
+
# this will print the commands that would be run for a given installation
+# type without actually running them; they may serve as
+# the basis for translating to your OS
+inv install --dry -v -p <prefix> -i <install type>
+
+

Dockerfile Based Install

+

You can leverage one of the provided Dockerfiles to run and launch CORE within a Docker container.

+

Since CORE nodes will leverage software available within the system for a given use case, make sure to update and build the Dockerfile with the desired software.

+
# clone core
+git clone https://github.com/coreemu/core.git
+cd core
+# build image
+sudo docker build -t core -f dockerfiles/Dockerfile.<centos,ubuntu> .
+# start container
+sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged core
+# enable xhost access to the root user
+xhost +local:root
+# launch core-gui
+sudo docker exec -it core core-gui
+
+

When done see Post Install.

+

Installing EMANE

+
+

Note

+

Installing EMANE for the virtual environment is known to work for 1.21+

+
+

The recommended way to install EMANE is using prebuilt packages; otherwise you can follow their instructions for installing from source. Installation information can be found here.

+

There is an invoke task to help install the EMANE bindings into the CORE virtual environment, when needed. An example for running the task is below; the version provided should match the version of the packages installed.

+

You will also need to make sure you are providing the correct python binary for where CORE is being used.

+

Also, these EMANE bindings need to be built using protoc 3.19+, so make sure it is available and picked up on PATH properly.

+

Examples for building and installing EMANE python bindings for use in CORE:

+
# if your system does not have protoc 3.19+
+wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
+mkdir protoc
+unzip protoc-3.19.6-linux-x86_64.zip -d protoc
+git clone https://github.com/adjacentlink/emane.git
+cd emane
+git checkout v1.3.3
+./autogen.sh
+PYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr
+cd src/python
+PATH=/opt/protoc/bin:$PATH make
+/opt/core/venv/bin/python -m pip install .
+
+# when your system has protoc 3.19+
+cd <CORE_REPO>
+# example version tag v1.3.3
+# overriding python used to leverage the default virtualenv install
+PYTHON=/opt/core/venv/bin/python inv install-emane -e <version tag>
+# local install that uses whatever python3 refers to
+inv install-emane -e <version tag>
+
+

Post Install

+

After installation completes you are now ready to run CORE.

+

Resolving Docker Issues

+

If you have Docker installed, by default it will change the iptables +forwarding chain to drop packets, which will cause issues for CORE traffic.

+

You can temporarily resolve the issue with the following command:

+
sudo iptables --policy FORWARD ACCEPT
+
+

Alternatively, you can configure Docker to avoid doing this, but this will likely break normal Docker networking usage. Using the setting below will require a restart.

+

Place the file contents below in /etc/docker/daemon.json

+
{
+  "iptables": false
+}
+
+
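After saving the file, restart the Docker service so the setting takes effect (assuming a systemd based system):

sudo systemctl restart docker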

Resolving Path Issues

+

One problem you may run into when running CORE, whether using the virtual environment or a local install, is issues related to your PATH.

+

To add support for your user to run scripts from the virtual environment:

+
# can add to ~/.bashrc
+export PATH=$PATH:/opt/core/venv/bin
+
+

This will not solve the path issue when running as sudo, so you can do either +of the following to compensate.

+
# run the command, passing in the right PATH to pick up from the user running the command
+sudo env PATH=$PATH core-daemon
+
+# add an alias to ~/.bashrc or something similar
+alias sudop='sudo env PATH=$PATH'
+# now you can run commands like so
+sudop core-daemon
+
+

Running CORE

+

The following assumes you have resolved PATH issues and set up the sudop alias.

+
# in one terminal run the server daemon using the alias above
+sudop core-daemon
+# in another terminal run the gui client
+core-gui
+
+

Enabling Service

+

After installation, the core service is not enabled by default. If you desire to use the +service, run the following commands.

+
sudo systemctl enable core-daemon
+sudo systemctl start core-daemon
+
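To confirm the service started successfully:

sudo systemctl status core-daemon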
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/install_centos.html b/install_centos.html new file mode 100644 index 00000000..91756f7c --- /dev/null +++ b/install_centos.html @@ -0,0 +1,1488 @@ + + + + + + + + + + + + + + + + + + + + + + CentOS - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Install CentOS

+

Overview

+

Below is a detailed path for installing CORE and related tooling on a fresh +CentOS 7 install. Both of the examples below will install CORE into its +own virtual environment located at /opt/core/venv. Both examples below +also assume using ~/Documents as the working directory.

+

Script Install

+

This section covers step by step commands that can be used to install CORE using +the script based installation path.

+
# install system packages
+sudo yum -y update
+sudo yum install -y git sudo wget tzdata unzip libpcap-devel libpcre3-devel \
+    libxml2-devel protobuf-devel unzip uuid-devel tcpdump make epel-release
+sudo yum-builddep -y python3
+
+# install python3.9
+cd ~/Documents
+wget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz
+tar xf Python-3.9.15.tgz
+cd Python-3.9.15
+./configure --enable-optimizations --with-ensurepip=install
+sudo make -j$(nproc) altinstall
+python3.9 -m pip install --upgrade pip
+
+# install core
+cd ~/Documents
+git clone https://github.com/coreemu/core
+cd core
+NO_SYSTEM=1 PYTHON=/usr/local/bin/python3.9 ./setup.sh
+source ~/.bashrc
+PYTHON=python3.9 inv install -p /usr --no-python
+
+# install emane
+cd ~/Documents
+wget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz
+tar xf emane-1.3.3-release-1.el7.x86_64.tar.gz
+cd emane-1.3.3-release-1/rpms/el7/x86_64
+sudo yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm
+
+# install emane python bindings into CORE virtual environment
+cd ~/Documents
+wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
+mkdir protoc
+unzip protoc-3.19.6-linux-x86_64.zip -d protoc
+git clone https://github.com/adjacentlink/emane.git
+cd emane
+git checkout v1.3.3
+./autogen.sh
+PYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr
+cd src/python
+PATH=~/Documents/protoc/bin:$PATH make
+sudo /opt/core/venv/bin/python -m pip install .
+
+

Package Install

+

This section covers step by step commands that can be used to install CORE using the package based installation path. This will require downloading a package from the releases page to use during the install CORE step below.

+
# install system packages
+sudo yum -y update
+sudo yum install -y git sudo wget tzdata unzip libpcap-devel libpcre3-devel libxml2-devel \
+    protobuf-devel unzip uuid-devel tcpdump automake gawk libreadline-devel libtool \
+    pkg-config make
+sudo yum-builddep -y python3
+
+# install python3.9
+cd ~/Documents
+wget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz
+tar xf Python-3.9.15.tgz
+cd Python-3.9.15
+./configure --enable-optimizations --with-ensurepip=install
+sudo make -j$(nproc) altinstall
+python3.9 -m pip install --upgrade pip
+
+# install core
+cd ~/Documents
+sudo PYTHON=python3.9 yum install -y ./core_*.rpm
+
+# install ospf mdr
+cd ~/Documents
+git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git
+cd ospf-mdr
+./bootstrap.sh
+./configure --disable-doc --enable-user=root --enable-group=root \
+    --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
+    --localstatedir=/var/run/quagga
+make -j$(nproc)
+sudo make install
+
+# install emane
+cd ~/Documents
+wget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz
+tar xf emane-1.3.3-release-1.el7.x86_64.tar.gz
+cd emane-1.3.3-release-1/rpms/el7/x86_64
+sudo yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm
+
+# install emane python bindings into CORE virtual environment
+cd ~/Documents
+wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
+mkdir protoc
+unzip protoc-3.19.6-linux-x86_64.zip -d protoc
+git clone https://github.com/adjacentlink/emane.git
+cd emane
+git checkout v1.3.3
+./autogen.sh
+PYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr
+cd src/python
+PATH=~/Documents/protoc/bin:$PATH make
+sudo /opt/core/venv/bin/python -m pip install .
+
+

Setup PATH

+

The CORE virtual environment and related scripts will not be found on your PATH, so some adjustments need to be made.

+

To add support for your user to run scripts from the virtual environment:

+
# can add to ~/.bashrc
+export PATH=$PATH:/opt/core/venv/bin
+
+

This will not solve the path issue when running as sudo, so you can do either +of the following to compensate.

+
# run the command, passing in the right PATH to pick up from the user running the command
+sudo env PATH=$PATH core-daemon
+
+# add an alias to ~/.bashrc or something similar
+alias sudop='sudo env PATH=$PATH'
+# now you can run commands like so
+sudop core-daemon
+
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/install_ubuntu.html b/install_ubuntu.html new file mode 100644 index 00000000..75089c57 --- /dev/null +++ b/install_ubuntu.html @@ -0,0 +1,1460 @@ + + + + + + + + + + + + + + + + + + + + + + Ubuntu - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Install Ubuntu

+

Overview

+

Below is a detailed path for installing CORE and related tooling on a fresh +Ubuntu 22.04 installation. Both of the examples below will install CORE into its +own virtual environment located at /opt/core/venv. Both examples below +also assume using ~/Documents as the working directory.

+

Script Install

+

This section covers step by step commands that can be used to install CORE using +the script based installation path.

+
# install system packages
+sudo apt-get update -y
+sudo apt-get install -y ca-certificates git sudo wget tzdata libpcap-dev libpcre3-dev \
+    libprotobuf-dev libxml2-dev protobuf-compiler unzip uuid-dev iproute2 iputils-ping \
+    tcpdump
+
+# install core
+cd ~/Documents
+git clone https://github.com/coreemu/core
+cd core
+./setup.sh
+source ~/.bashrc
+inv install
+
+# install emane
+cd ~/Documents
+wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
+mkdir protoc
+unzip protoc-3.19.6-linux-x86_64.zip -d protoc
+git clone https://github.com/adjacentlink/emane.git
+cd emane
+./autogen.sh
+./configure --prefix=/usr
+make -j$(nproc)
+sudo make install
+cd src/python
+make clean
+PATH=~/Documents/protoc/bin:$PATH make
+sudo /opt/core/venv/bin/python -m pip install .
+
+

Package Install

+

This section covers step by step commands that can be used to install CORE using the package based installation path. This will require downloading a package from the releases page to use during the install CORE step below.

+
# install system packages
+sudo apt-get update -y
+sudo apt-get install -y ca-certificates python3 python3-tk python3-pip python3-venv \
+    libpcap-dev libpcre3-dev libprotobuf-dev libxml2-dev protobuf-compiler unzip \
+    uuid-dev automake gawk git wget libreadline-dev libtool pkg-config g++ make \
+    iputils-ping tcpdump
+
+# install core
+cd ~/Documents
+sudo apt-get install -y ./core_*.deb
+
+# install ospf mdr
+cd ~/Documents
+git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git
+cd ospf-mdr
+./bootstrap.sh
+./configure --disable-doc --enable-user=root --enable-group=root \
+    --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
+    --localstatedir=/var/run/quagga
+make -j$(nproc)
+sudo make install
+
+# install emane
+cd ~/Documents
+wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
+mkdir protoc
+unzip protoc-3.19.6-linux-x86_64.zip -d protoc
+git clone https://github.com/adjacentlink/emane.git
+cd emane
+./autogen.sh
+./configure --prefix=/usr
+make -j$(nproc)
+sudo make install
+cd src/python
+make clean
+PATH=~/Documents/protoc/bin:$PATH make
+sudo /opt/core/venv/bin/python -m pip install .
+
+

Setup PATH

+

The CORE virtual environment and related scripts will not be found on your PATH, so some adjustments need to be made.

+

To add support for your user to run scripts from the virtual environment:

+
# can add to ~/.bashrc
+export PATH=$PATH:/opt/core/venv/bin
+
+

This will not solve the path issue when running as sudo, so you can do either +of the following to compensate.

+
# run the command, passing in the right PATH to pick up from the user running the command
+sudo env PATH=$PATH core-daemon
+
+# add an alias to ~/.bashrc or something similar
+alias sudop='sudo env PATH=$PATH'
+# now you can run commands like so
+sudop core-daemon
+
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/lxc.html b/lxc.html new file mode 100644 index 00000000..2b382647 --- /dev/null +++ b/lxc.html @@ -0,0 +1,1433 @@ + + + + + + + + + + + + + + + + + + + + + + LXC - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

LXC Support

+

Overview

+

LXC nodes are provided by way of LXD to create nodes using predefined +images and provide file system separation.

+

Installation

+

Debian Systems

+
sudo snap install lxd
+
+

Configuration

+

Initialize LXD and say no to adding a default bridge.

+
sudo lxd init
+
+

Group Setup

+

To use LXC nodes within the python GUI, you will need to make sure the user running the GUI is a member of the +lxd group.

+
# add group if it does not exist
+sudo groupadd lxd
+
+# add user to group
+sudo usermod -aG lxd $USER
+
+# to get this change to take effect, log out and back in or run the following
+newgrp lxd
+
+
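To verify the group change took effect and your user can talk to LXD (a quick check, assuming LXD has already been initialized):

# should list containers (possibly an empty list) without a permission error
+lxc list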

Tools and Versions Tested With

+
    +
  • LXD 3.14
  • +
  • nsenter from util-linux 2.31.1
  • +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/nodetypes.html b/nodetypes.html new file mode 100644 index 00000000..deff74bf --- /dev/null +++ b/nodetypes.html @@ -0,0 +1,1418 @@ + + + + + + + + + + + + + + + + + + + + + + Overview - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Node Types

+

Overview

+

Different node types can be used within CORE, each with their own +tradeoffs and functionality.

+

CORE Nodes

+

CORE nodes are the standard node type typically used in CORE. They are backed by Linux network namespaces. They use very little system resources in order to emulate a network. They do, however, share the host's file system, as they do not get their own. CORE nodes will have a directory uniquely created for them as a place to keep their files and mounted directories (/tmp/pycore.<session id>/<node name>.conf), which will usually be wiped and removed upon shutdown.

+
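With a session running, you can see these per-node directories on the host (a quick sketch; the session id and node names will vary):

# list per-node configuration directories for running sessions
+ls -d /tmp/pycore.*/*.conf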

Docker Nodes

+

Docker nodes provide a convenience for running nodes using predefined images and filesystems that CORE nodes do not provide. Details for using Docker nodes can be found here.

+

LXC Nodes

+

LXC nodes provide a convenience for running nodes using predefined images and filesystems that CORE nodes do not provide. Details for using LXC nodes can be found here.

+

Physical Nodes

+

The physical machine type is used for nodes that represent a real Linux-based +machine that will participate in the emulated network scenario. This is +typically used, for example, to incorporate racks of server machines from an +emulation testbed. A physical node is one that is running the CORE daemon +(core-daemon), but will not be further partitioned into containers. +Services that are run on the physical node do not run in an isolated +environment, but directly on the operating system.

+

Physical nodes must be assigned to servers, the same way nodes are assigned to +emulation servers with Distributed Emulation. The list of available physical +nodes currently shares the same dialog box and list as the emulation servers, +accessed using the Emulation Servers... entry from the Session menu.

+

Support for physical nodes is under development and may be improved in future +releases. Currently, when any node is linked to a physical node, a dashed line +is drawn to indicate network tunneling. A GRE tunneling interface will be +created on the physical node and used to tunnel traffic to and from the +emulated world.

+

Double-clicking on a physical node during runtime opens a terminal with an +SSH shell to that node. Users should configure public-key SSH login as done +with emulation servers.

+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/performance.html b/performance.html new file mode 100644 index 00000000..6d92a1e8 --- /dev/null +++ b/performance.html @@ -0,0 +1,1382 @@ + + + + + + + + + + + + + + + + + + + + + + Performance - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

CORE Performance

+

Overview

+

The top question about the performance of CORE is often how many nodes can it +handle? The answer depends on several factors:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FactorPerformance Impact
Hardware: the number and speed of processors in the computer, the available processor cache, RAM memory, and front-side bus speed may greatly affect overall performance.
Operating system version: the distribution of Linux and the specific kernel versions used will affect overall performance.
Active processes: all nodes share the same CPU resources, so if one or more nodes is performing a CPU-intensive task, overall performance will suffer.
Network traffic: the more packets that are sent around the virtual network, the greater the CPU usage.
GUI usage: widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation.
+

On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux, +we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3 +routing. On this hardware CORE can instantiate 100 or more nodes, but at +that point it becomes critical as to what each of the nodes is doing.

+

Because this software is primarily a network emulator, the more appropriate +question is how much network traffic can it handle? On the same 3.0GHz +server described above, running Linux, about 300,000 packets-per-second can +be pushed through the system. The number of hops and the size of the packets +is less important. The limiting factor is the number of times that the +operating system needs to handle a packet. The 300,000 pps figure represents +the number of times the system as a whole needed to deal with a packet. As +more network hops are added, this increases the number of context switches +and decreases the throughput seen on the full length of the network path.

+
+

Note

+

The right question to be asking is "how much traffic?", not +"how many nodes?".

+
+

For a more detailed study of performance in CORE, refer to the following +publications:

+
    +
  • J. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE + Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
  • +
  • Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings + of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
  • +
  • J. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time + network emulator, Proceedings of IEEE MILCOM Conference, 2008.
  • +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/python.html b/python.html new file mode 100644 index 00000000..7ba0b8f5 --- /dev/null +++ b/python.html @@ -0,0 +1,1886 @@ + + + + + + + + + + + + + + + + + + + + + + Python - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+
+ + + + +

Python API

+

Overview

+

Writing your own Python scripts offers a rich programming environment with +complete control over all aspects of the emulation.

+

The scripts need to be run with root privileges because they create new network namespaces. In general, a CORE Python script does not connect to the CORE daemon; in fact, the core-daemon is just another Python script that uses the CORE Python modules and exchanges messages with the GUI.

+

Examples

+

Node Models

+

When creating nodes of type core.nodes.base.CoreNode, these are the default models and the services they map to (a short sketch of selecting a model follows the list).

+
    +
  • mdr
      +
    • zebra, OSPFv3MDR, IPForward
    • +
    +
  • +
  • PC
      +
    • DefaultRoute
    • +
    +
  • +
  • router
      +
    • zebra, OSPFv2, OSPFv3, IPForward
    • +
    +
  • +
  • host
      +
    • DefaultRoute, SSH
    • +
    +
  • +
+
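For example, selecting one of these models when adding a node (a short sketch assuming an existing session, based on the options usage shown in the examples further below):

from core.nodes.base import CoreNode, Position
+
+# choose the "router" model; its mapped services (zebra, OSPFv2, OSPFv3, IPForward)
+# will be applied to the node
+options = CoreNode.create_options()
+options.model = "router"
+position = Position(x=100, y=100)
+n1 = session.add_node(CoreNode, position=position, options=options)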

Interface Helper

+

There is an interface helper class that can be leveraged for convenience when creating interface data for nodes. Alternatively, one can manually create a core.emulator.data.InterfaceData instance with the appropriate information.

+

Manually creating interface data:

+
from core.emulator.data import InterfaceData
+
+# id is optional and will be set to the next available id
+# name is optional and will default to eth<id>
+# mac is optional and will result in a randomly generated mac
+iface_data = InterfaceData(
+    id=0,
+    name="eth0",
+    ip4="10.0.0.1",
+    ip4_mask=24,
+    ip6="2001::",
+    ip6_mask=64,
+)
+
+

Leveraging the interface prefixes helper class:

+
from core.emulator.data import IpPrefixes
+
+ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
+# node is used to get an ip4/ip6 address indexed from within the above prefixes
+# name is optional and would default to eth<id>
+# mac is optional and will result in a randomly generated mac
+iface_data = ip_prefixes.create_iface(
+    node=node, name="eth0", mac="00:00:00:00:aa:00"
+)
+
+

Listening to Events

+

Various events that can occur within a session can be listened to.

+

Event types:

+
    +
  • session - events for changes in session state and mobility start/stop/pause
  • +
  • node - events for node movements and icon changes
  • +
  • link - events for link configuration changes and wireless link add/delete
  • +
  • config - configuration events when legacy gui joins a session
  • +
  • exception - alert/error events
  • +
  • file - file events when the legacy gui joins a session
  • +
+
def event_listener(event):
+    print(event)
+
+
+# add an event listener to event type you want to listen to
+# each handler will receive an object unique to that type
+session.event_handlers.append(event_listener)
+session.exception_handlers.append(event_listener)
+session.node_handlers.append(event_listener)
+session.link_handlers.append(event_listener)
+session.file_handlers.append(event_listener)
+session.config_handlers.append(event_listener)
+
+ +

Links can be configured at the time of creation or during runtime.

+

Currently supported configuration options:

+
    +
  • bandwidth (bps)
  • +
  • delay (us)
  • +
  • dup (%)
  • +
  • jitter (us)
  • +
  • loss (%)
  • +
+
from core.emulator.data import LinkOptions
+
+# configuring when creating a link
+options = LinkOptions(
+    bandwidth=54_000_000,
+    delay=5000,
+    dup=5,
+    loss=5.5,
+    jitter=0,
+)
+session.add_link(n1_id, n2_id, iface1_data, iface2_data, options)
+
+# configuring during runtime
+session.update_link(n1_id, n2_id, iface1_id, iface2_id, options)
+
+

Peer to Peer Example

+
# required imports
+from core.emulator.coreemu import CoreEmu
+from core.emulator.data import IpPrefixes
+from core.emulator.enumerations import EventTypes
+from core.nodes.base import CoreNode, Position
+
+# ip generator for example
+ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
+
+# create emulator instance for creating sessions and utility methods
+coreemu = CoreEmu()
+session = coreemu.create_session()
+
+# must be in configuration state for nodes to start, when using "add_node" below
+session.set_state(EventTypes.CONFIGURATION_STATE)
+
+# create nodes
+position = Position(x=100, y=100)
+n1 = session.add_node(CoreNode, position=position)
+position = Position(x=300, y=100)
+n2 = session.add_node(CoreNode, position=position)
+
+# link nodes together
+iface1 = ip_prefixes.create_iface(n1)
+iface2 = ip_prefixes.create_iface(n2)
+session.add_link(n1.id, n2.id, iface1, iface2)
+
+# start session
+session.instantiate()
+
+# do whatever you like here
+input("press enter to shutdown")
+
+# stop session
+session.shutdown()
+
+

Switch/Hub Example

+
# required imports
+from core.emulator.coreemu import CoreEmu
+from core.emulator.data import IpPrefixes
+from core.emulator.enumerations import EventTypes
+from core.nodes.base import CoreNode, Position
+from core.nodes.network import SwitchNode
+
+# ip generator for example
+ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
+
+# create emulator instance for creating sessions and utility methods
+coreemu = CoreEmu()
+session = coreemu.create_session()
+
+# must be in configuration state for nodes to start, when using "add_node" below
+session.set_state(EventTypes.CONFIGURATION_STATE)
+
+# create switch
+position = Position(x=200, y=200)
+switch = session.add_node(SwitchNode, position=position)
+
+# create nodes
+position = Position(x=100, y=100)
+n1 = session.add_node(CoreNode, position=position)
+position = Position(x=300, y=100)
+n2 = session.add_node(CoreNode, position=position)
+
+# link nodes to switch
+iface1 = ip_prefixes.create_iface(n1)
+session.add_link(n1.id, switch.id, iface1)
+iface1 = ip_prefixes.create_iface(n2)
+session.add_link(n2.id, switch.id, iface1)
+
+# start session
+session.instantiate()
+
+# do whatever you like here
+input("press enter to shutdown")
+
+# stop session
+session.shutdown()
+
+

WLAN Example

+
# required imports
+from core.emulator.coreemu import CoreEmu
+from core.emulator.data import IpPrefixes
+from core.emulator.enumerations import EventTypes
+from core.location.mobility import BasicRangeModel
+from core.nodes.base import CoreNode, Position
+from core.nodes.network import WlanNode
+
+# ip generator for example
+ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
+
+# create emulator instance for creating sessions and utility methods
+coreemu = CoreEmu()
+session = coreemu.create_session()
+
+# must be in configuration state for nodes to start, when using "add_node" below
+session.set_state(EventTypes.CONFIGURATION_STATE)
+
+# create wlan
+position = Position(x=200, y=200)
+wlan = session.add_node(WlanNode, position=position)
+
+# create nodes
+options = CoreNode.create_options()
+options.model = "mdr"
+position = Position(x=100, y=100)
+n1 = session.add_node(CoreNode, position=position, options=options)
+position = Position(x=300, y=100)
+n2 = session.add_node(CoreNode, position=position, options=options)
+
+# configuring wlan
+session.mobility.set_model_config(wlan.id, BasicRangeModel.name, {
+    "range": "280",
+    "bandwidth": "55000000",
+    "delay": "6000",
+    "jitter": "5",
+    "error": "5",
+})
+
+# link nodes to wlan
+iface1 = ip_prefixes.create_iface(n1)
+session.add_link(n1.id, wlan.id, iface1)
+iface1 = ip_prefixes.create_iface(n2)
+session.add_link(n2.id, wlan.id, iface1)
+
+# start session
+session.instantiate()
+
+# do whatever you like here
+input("press enter to shutdown")
+
+# stop session
+session.shutdown()
+
+

EMANE Example

+

For EMANE you can import and use one of the existing models and +use its name for configuration.

+

Current models:

+
    +
  • core.emane.models.ieee80211abg.EmaneIeee80211abgModel
  • +
  • core.emane.models.rfpipe.EmaneRfPipeModel
  • +
  • core.emane.models.tdma.EmaneTdmaModel
  • +
  • core.emane.models.bypass.EmaneBypassModel
  • +
+

Their configuration options are driven dynamically from parsed EMANE manifest files from the installed version of EMANE.

+

Options and their purpose can be found at the EMANE Wiki.

+

When configuring EMANE global settings or model mac/phy specific settings, any value not provided will use its default. When no configuration is provided, the defaults are used.

+
# required imports
+from core.emane.models.ieee80211abg import EmaneIeee80211abgModel
+from core.emane.nodes import EmaneNet
+from core.emulator.coreemu import CoreEmu
+from core.emulator.data import IpPrefixes
+from core.emulator.enumerations import EventTypes
+from core.nodes.base import CoreNode, Position
+
+# ip generator for example
+ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
+
+# create emulator instance for creating sessions and utility methods
+coreemu = CoreEmu()
+session = coreemu.create_session()
+
+# location information is required to be set for emane
+session.location.setrefgeo(47.57917, -122.13232, 2.0)
+session.location.refscale = 150.0
+
+# must be in configuration state for nodes to start, when using "add_node" below
+session.set_state(EventTypes.CONFIGURATION_STATE)
+
+# create emane
+options = EmaneNet.create_options()
+options.emane_model = EmaneIeee80211abgModel.name
+position = Position(x=200, y=200)
+emane = session.add_node(EmaneNet, position=position, options=options)
+
+# create nodes
+options = CoreNode.create_options()
+options.model = "mdr"
+position = Position(x=100, y=100)
+n1 = session.add_node(CoreNode, position=position, options=options)
+position = Position(x=300, y=100)
+n2 = session.add_node(CoreNode, position=position, options=options)
+
+# configure general emane settings
+config = session.emane.get_configs()
+config.update({
+    "eventservicettl": "2"
+})
+
+# configure emane model settings
+# using a dict mapping, currently supported values are strings
+session.emane.set_model_config(emane.id, EmaneIeee80211abgModel.name, {
+    "unicastrate": "3",
+})
+
+# link nodes to emane
+iface1 = ip_prefixes.create_iface(n1)
+session.add_link(n1.id, emane.id, iface1)
+iface1 = ip_prefixes.create_iface(n2)
+session.add_link(n2.id, emane.id, iface1)
+
+# start session
+session.instantiate()
+
+# do whatever you like here
+input("press enter to shutdown")
+
+# stop session
+session.shutdown()
+
+

EMANE Model Configuration:

+
from core import utils
+
+# standardized way to retrieve an appropriate config id
+# iface id can be omitted, to allow a general configuration for a model, per node
+config_id = utils.iface_config_id(node.id, iface_id)
+# set emane configuration for the config id
+session.emane.set_config(config_id, EmaneIeee80211abgModel.name, {
+    "unicastrate": "3",
+})
+
+

Configuring a Service

+

Services help generate and run bash scripts on nodes for a given purpose.

+

Configuring the files of a service results in a specific hard-coded script being generated, instead of the default scripts, which may leverage dynamic generation.

+

The following features can be configured for a service:

+
    +
  • configs - files that will be generated
  • +
  • dirs - directories that will be mounted unique to the node
  • +
  • startup - commands to run to start a service
  • +
  • validate - commands to run to validate a service
  • +
  • shutdown - commands to run to stop a service
  • +
+

Editing service properties:

+
# configure a service, for a node, for a given session
+session.services.set_service(node_id, service_name)
+service = session.services.get_service(node_id, service_name)
+service.configs = ("file1.sh", "file2.sh")
+service.dirs = ("/etc/node",)
+service.startup = ("bash file1.sh",)
+service.validate = ()
+service.shutdown = ()
+
+

When editing a service file, the file name must be the name of a config file that the service will generate.

+

Editing a service file:

+
# to edit the contents of a generated file you can specify
+# the service, the file name, and its contents
+session.services.set_service_file(
+    node_id,
+    service_name,
+    file_name,
+    "echo hello",
+)
+
+

File Examples

+

File versions of the network examples can be found +here.

+

Executing Scripts from GUI

+

To execute a python script from the GUI, you need to have the following.

+

The builtin name check below is used to know the script is being executed from the GUI; this can be avoided if your script does not use a name check.

+
if __name__ in ["__main__", "__builtin__"]:
+    main()
+
+

A script can add sessions to the core-daemon. A global coreemu variable is +exposed to the script pointing to the CoreEmu object.

+

The example below has a fallback to a new CoreEmu object, in case you would like to run the script standalone, outside of the core-daemon.

+
coreemu = globals().get("coreemu") or CoreEmu()
+session = coreemu.create_session()
+
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..9fedafe9 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"CORE Documentation","text":""},{"location":"index.html#introduction","title":"Introduction","text":"

CORE (Common Open Research Emulator) is a tool for building virtual networks. As an emulator, CORE builds a representation of a real computer network that runs in real time, as opposed to simulation, where abstract models are used. The live-running emulation can be connected to physical networks and routers. It provides an environment for running real applications and protocols, taking advantage of tools provided by the Linux operating system.

CORE is typically used for network and protocol research, demonstrations, application and platform testing, evaluating networking scenarios, security studies, and increasing the size of physical test networks.

"},{"location":"index.html#key-features","title":"Key Features","text":"
  • Efficient and scalable
  • Runs applications and protocols without modification
  • Drag and drop GUI
  • Highly customizable
"},{"location":"architecture.html","title":"CORE Architecture","text":""},{"location":"architecture.html#main-components","title":"Main Components","text":"
  • core-daemon
    • Manages emulated sessions of nodes and links for a given network
    • Nodes are created using Linux namespaces
    • Links are created using Linux bridges and virtual ethernet peers
    • Packets sent over links are manipulated using traffic control
    • Provides gRPC API
  • core-gui
    • GUI and daemon communicate over gRPC API
    • Drag and drop creation for nodes and links
    • Can launch terminals for emulated nodes in running sessions
    • Can save/open scenario files to recreate previous sessions
  • vnoded
    • Command line utility for creating CORE node namespaces
  • vcmd
    • Command line utility for sending shell commands to nodes
"},{"location":"architecture.html#sessions","title":"Sessions","text":"

CORE can create and run multiple emulated sessions at once, below is an overview of the states a session will transition between during typical GUI interactions.

"},{"location":"architecture.html#how-does-it-work","title":"How Does it Work?","text":"

The CORE framework runs on Linux and uses Linux namespacing for creating node containers. These nodes are linked together using Linux bridging and virtual interfaces. CORE sessions are a set of nodes and links operating together for a specific purpose.

"},{"location":"architecture.html#linux","title":"Linux","text":"

Linux network namespaces (also known as netns) is the primary technique used by CORE. Most recent Linux distributions have namespaces-enabled kernels out of the box. Each namespace has its own process environment and private network stack. Network namespaces share the same filesystem in CORE.

CORE combines these namespaces with Linux Ethernet bridging to form networks. Link characteristics are applied using Linux Netem queuing disciplines. Nftables provides Ethernet frame filtering on Linux bridges. Wireless networks are emulated by controlling which interfaces can send and receive with nftables rules.

"},{"location":"architecture.html#open-source-project-and-resources","title":"Open Source Project and Resources","text":"

CORE has been released by Boeing to the open source community under the BSD license. If you find CORE useful for your work, please contribute back to the project. Contributions can be as simple as reporting a bug, dropping a line of encouragement, or can also include submitting patches or maintaining aspects of the tool.

"},{"location":"configservices.html","title":"Config Services","text":""},{"location":"configservices.html#overview","title":"Overview","text":"

Config services are a newer version of services for CORE that leverage a templating engine for more robust service file creation. They also have the power of configuration key/value pairs, with values that can be defined and displayed within the GUI, to help further tweak a service as needed.

CORE services are a convenience for creating reusable dynamic scripts to run on nodes, for carrying out specific task(s).

This boils down to the following functions:

  • generating files the service will use, either directly for commands or for configuration
  • command(s) for starting a service
  • command(s) for validating a service
  • command(s) for stopping a service

Most CORE nodes will have a default set of services associated with them to run. You can however customize the set of services a node will use, or go further and define a new node type within the GUI, with its own set of services, allowing that node type to be quickly dragged and dropped during creation.

"},{"location":"configservices.html#available-services","title":"Available Services","text":"Service Group Services BIRD BGP, OSPF, RADV, RIP, Static EMANE Transport Service FRR BABEL, BGP, OSPFv2, OSPFv3, PIMD, RIP, RIPNG, Zebra NRL arouted, MGEN Sink, MGEN Actor, NHDP, OLSR, OLSRORG, OLSRv2, SMF Quagga BABEL, BGP, OSPFv2, OSPFv3, OSPFv3 MDR, RIP, RIPNG, XPIMD, Zebra SDN OVS, RYU Security Firewall, IPsec, NAT, VPN Client, VPN Server Utility ATD, Routing Utils, DHCP, FTP, IP Forward, PCAP, RADVD, SSF, UCARP XORP BGP, OLSR, OSPFv2, OSPFv3, PIMSM4, PIMSM6, RIP, RIPNG, Router Manager"},{"location":"configservices.html#node-types-and-default-services","title":"Node Types and Default Services","text":"

Here are the default node types and their services:

Node Type Services router zebra, OSFPv2, OSPFv3, and IPForward services for IGP link-state routing. PC DefaultRoute service for having a default route when connected directly to a router. mdr zebra, OSPFv3MDR, and IPForward services for wireless-optimized MANET Designated Router routing. prouter a physical router, having the same default services as the router node type; for incorporating Linux testbed machines into an emulation.

Configuration files can be automatically generated by each service. For example, CORE automatically generates routing protocol configuration for the router nodes in order to simplify the creation of virtual networks.

To change the services associated with a node, double-click on the node to invoke its configuration dialog and click on the Services... button, or right-click a node and choose Services... from the menu. Services are enabled or disabled by clicking on their names. The button next to each service name allows you to customize all aspects of this service for this node. For example, special route redistribution commands could be inserted into the Quagga routing configuration associated with the zebra service.

To change the default services associated with a node type, use the Node Types dialog available from the Edit button at the end of the Layer-3 nodes toolbar, or choose Node types... from the Session menu. Note that any new services selected are not applied to existing nodes if the nodes have been customized.

The node types are saved in the GUI config file ~/.coregui/config.yaml. Keep this in mind when changing the default services for existing node types; it may be better to simply create a new node type. It is recommended that you do not change the default built-in node types.

"},{"location":"configservices.html#new-services","title":"New Services","text":"

Services can save time required to configure nodes, especially if a number of nodes require similar configuration procedures. New services can be introduced to automate tasks.

"},{"location":"configservices.html#creating-new-services","title":"Creating New Services","text":"

Note

The directory base name used in custom_services_dir below should be unique and should not correspond to any existing Python module name. For example, don't use the name subprocess or services.

  1. Modify the example service shown below to do what you want. It could generate config/script files, mount per-node directories, start processes/scripts, etc. Your file can define one or more classes to be imported. You can create multiple Python files that will be imported.

  2. Put these files in a directory such as ~/.coregui/custom_services.

  3. Add a custom_config_services_dir = ~/.coregui/custom_services entry to the /etc/core/core.conf file.

  4. Restart the CORE daemon (core-daemon). Any import errors (Python syntax) should be displayed in the terminal (or service log, like journalctl).

  5. Start using your custom service on your nodes. You can create a new node type that uses your service, or change the default services for an existing node type, or change individual nodes.

"},{"location":"configservices.html#example-custom-service","title":"Example Custom Service","text":"

Below is the skeleton for a custom service with some documentation. Most people would likely only setup the required class variables (name/group). Then define the files to generate and implement the get_text_template function to dynamically create the files wanted. Finally, the startup commands would be supplied, which typically tend to be running the shell files generated.

from typing import Dict, List\n\nfrom core.config import ConfigString, ConfigBool, Configuration\nfrom core.configservice.base import ConfigService, ConfigServiceMode, ShadowDir\n\n\n# class that subclasses ConfigService\nclass ExampleService(ConfigService):\n    # unique name for your service within CORE\n    name: str = \"Example\"\n    # the group your service is associated with, used for display in GUI\n    group: str = \"ExampleGroup\"\n    # directories that the service should shadow mount, hiding the system directory\n    directories: List[str] = [\n        \"/usr/local/core\",\n    ]\n    # files that this service should generate, defaults to nodes home directory\n    # or can provide an absolute path to a mounted directory\n    files: List[str] = [\n        \"example-start.sh\",\n        \"/usr/local/core/file1\",\n    ]\n    # executables that should exist on path, that this service depends on\n    executables: List[str] = []\n    # other services that this service depends on, can be used to define service start order\n    dependencies: List[str] = []\n    # commands to run to start this service\n    startup: List[str] = []\n    # commands to run to validate this service\n    validate: List[str] = []\n    # commands to run to stop this service\n    shutdown: List[str] = []\n    # validation mode, blocking, non-blocking, and timer\n    validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING\n    # configurable values that this service can use, for file generation\n    default_configs: List[Configuration] = [\n        ConfigString(id=\"value1\", label=\"Text\"),\n        ConfigBool(id=\"value2\", label=\"Boolean\"),\n        ConfigString(id=\"value3\", label=\"Multiple Choice\", options=[\"value1\", \"value2\", \"value3\"]),\n    ]\n    # sets of values to set for the configuration defined above, can be used to\n    # provide convenient sets of values to typically use\n    modes: Dict[str, Dict[str, str]] = {\n        \"mode1\": {\"value1\": \"value1\", \"value2\": \"0\", \"value3\": \"value2\"},\n        \"mode2\": {\"value1\": \"value2\", \"value2\": \"1\", \"value3\": \"value3\"},\n        \"mode3\": {\"value1\": \"value3\", \"value2\": \"0\", \"value3\": \"value1\"},\n    }\n    # defines directories that this service can help shadow within a node\n    shadow_directories: List[ShadowDir] = [\n        ShadowDir(path=\"/user/local/core\", src=\"/opt/core\")\n    ]\n\n    def get_text_template(self, name: str) -> str:\n        return \"\"\"\n        # sample script 1\n        # node id(${node.id}) name(${node.name})\n        # config: ${config}\n        echo hello\n        \"\"\"\n
"},{"location":"configservices.html#validation-mode","title":"Validation Mode","text":"

Validation modes are used to determine if a service has started up successfully.

  • blocking - startup commands are expected to run until completion and return a 0 exit code
  • non-blocking - startup commands are run, but are not waited on for completion
  • timer - startup commands are run, and an arbitrary amount of time is waited before the service is considered started
"},{"location":"configservices.html#shadow-directories","title":"Shadow Directories","text":"

Shadow directories provide a convenience for copying a directory and the files within it to a node's home directory, to allow a unique set of per-node files.

  • ShadowDir(path=\"/user/local/core\") - copies files at the given location into the node
  • ShadowDir(path=\"/user/local/core\", src=\"/opt/core\") - copies files to the given location, but sourced from the provided location
  • ShadowDir(path=\"/user/local/core\", templates=True) - copies files and treats them as templates for generation
  • ShadowDir(path=\"/user/local/core\", has_node_paths=True) - copies files from the given location, and looks for unique node name directories within it, using a directory named default when not present
"},{"location":"ctrlnet.html","title":"CORE Control Network","text":""},{"location":"ctrlnet.html#overview","title":"Overview","text":"

The CORE control network allows the virtual nodes to communicate with their host environment. There are two types: the primary control network and auxiliary control networks. The primary control network is used mainly for communicating with the virtual nodes from host machines and for master-slave communications in a multi-server distributed environment. Auxiliary control networks have been introduced for routing namespace-hosted emulation software traffic to the test network.

"},{"location":"ctrlnet.html#activating-the-primary-control-network","title":"Activating the Primary Control Network","text":"

Under the Session Menu, the Options... dialog has an option to set a control network prefix.

This can be set to a network prefix such as 172.16.0.0/24. A bridge will be created on the host machine having the last address in the prefix range (e.g. 172.16.0.254), and each node will have an extra ctrl0 control interface configured with an address corresponding to its node number (e.g. 172.16.0.3 for n3.)

A default for the primary control network may also be specified by setting the controlnet line in the /etc/core/core.conf configuration file which new sessions will use by default. To simultaneously run multiple sessions with control networks, the session option should be used instead of the core.conf default.

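For example, a minimal /etc/core/core.conf entry using the prefix above:

controlnet = 172.16.0.0/24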
Note

If you have a large scenario with more than 253 nodes, use a control network prefix that allows more than the suggested /24, such as /23 or greater.

Note

Running a session with a control network can fail if a previous session has set up a control network and its bridge is still up. Close the previous session first or wait for it to complete. If unable to, the core-daemon may need to be restarted and the lingering bridge(s) removed manually.

# Restart the CORE Daemon\nsudo /etc/init.d/core-daemon restart\n\n# Remove lingering control network bridges\nctrlbridges=`brctl show | grep b.ctrl | awk '{print $1}'`\nfor cb in $ctrlbridges; do\n  sudo ifconfig $cb down\n  sudo brctl delbr $cb\ndone\n

Note

If adjustments to the primary control network configuration made in /etc/core/core.conf do not seem to take effect, check if there is anything set in the Session Menu, the Options... dialog. They may need to be cleared. These per session settings override the defaults in /etc/core/core.conf.

"},{"location":"ctrlnet.html#control-network-in-distributed-sessions","title":"Control Network in Distributed Sessions","text":"

When the primary control network is activated for a distributed session, a control network bridge will be created on each of the slave servers, with GRE tunnels back to the master server's bridge. The slave control bridges are not assigned an address. From the host, any of the nodes (local or remote) can be accessed, just like the single server case.

In some situations, remote emulated nodes need to communicate with the host on which they are running and not the master server. Multiple control network prefixes can be specified in either the session option or /etc/core/core.conf, separated by spaces and beginning with the master server. Each entry has the form \"server:prefix\". For example, assume the servers core1, core2, and core3 are assigned nodes in the scenario and /etc/core/core.conf is used instead of the session option.

controlnet=core1:172.16.1.0/24 core2:172.16.2.0/24 core3:172.16.3.0/24\n

Then, the control network bridges will be assigned as follows:

  • core1 = 172.16.1.254 (assuming it is the master server),
  • core2 = 172.16.2.254
  • core3 = 172.16.3.254

Tunnels back to the master server will still be built, but it is up to the user to add appropriate routes if networking between control network prefixes is desired. The control network script may help with this.

"},{"location":"ctrlnet.html#control-network-script","title":"Control Network Script","text":"

A control network script may be specified using the controlnet_updown_script option in the /etc/core/core.conf file. This script will be run after the bridge has been built (and address assigned) with the first argument being the name of the bridge, and the second argument being the keyword \"startup\". The script will again be invoked prior to bridge removal with the second argument being the keyword \"shutdown\".

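A minimal sketch of such a script, based only on the arguments described above (the contents are illustrative):

#!/bin/sh
# $1 = control network bridge name, $2 = "startup" or "shutdown"
BRIDGE=$1
ACTION=$2
if [ "$ACTION" = "startup" ]; then
    logger "control network $BRIDGE is up"
else
    logger "control network $BRIDGE is coming down"
fi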
"},{"location":"ctrlnet.html#auxiliary-control-networks","title":"Auxiliary Control Networks","text":"

Starting with EMANE 0.9.2, CORE will run EMANE instances within namespaces. Since it is advisable to separate the OTA traffic from other traffic, we will need more than a single channel leading out from the namespace. Up to three auxiliary control networks may be defined. Multiple control networks are set up in the /etc/core/core.conf file. Lines controlnet1, controlnet2 and controlnet3 define the auxiliary networks.

For example, having the following /etc/core/core.conf:

controlnet = core1:172.17.1.0/24 core2:172.17.2.0/24 core3:172.17.3.0/24\ncontrolnet1 = core1:172.18.1.0/24 core2:172.18.2.0/24 core3:172.18.3.0/24\ncontrolnet2 = core1:172.19.1.0/24 core2:172.19.2.0/24 core3:172.19.3.0/24\n

This will activate the primary and two auxiliary control networks and add interfaces ctrl0, ctrl1, ctrl2 to each node. One use case would be to assign ctrl1 to the OTA manager device and ctrl2 to the Event Service device in the EMANE Options dialog box and leave ctrl0 for CORE control traffic.

Note

controlnet0 may be used in place of controlnet to configure the primary control network.

Unlike the primary control network, the auxiliary control networks will not employ tunneling since their primary purpose is for efficiently transporting multicast EMANE OTA and event traffic. Note that there is no per-session configuration for auxiliary control networks.

To extend the auxiliary control networks across a distributed test environment, host network interfaces need to be added to them. The following lines in /etc/core/core.conf will add host devices eth1, eth2 and eth3 to controlnet1, controlnet2, controlnet3:

controlnetif1 = eth1\ncontrolnetif2 = eth2\ncontrolnetif3 = eth3\n

Note

There is no need to assign an interface to the primary control network because tunnels are formed between the master and the slaves using IP addresses that are provided in servers.conf.

Shown below is a representative diagram of the configuration above.

"},{"location":"devguide.html","title":"CORE Developer's Guide","text":""},{"location":"devguide.html#overview","title":"Overview","text":"

The CORE source consists of several programming languages for historical reasons. Current development focuses on the Python modules and daemon. Here is a brief description of the source directories.

Directory Description daemon Python CORE daemon/gui code that handles receiving API calls and creating containers docs Markdown Documentation currently hosted on GitHub man Template files for creating man pages for various CORE command line utilities netns C program for creating CORE containers"},{"location":"devguide.html#getting-started","title":"Getting started","text":"

To set up CORE for development, we will leverage the automated install script.

"},{"location":"devguide.html#clone-core-repo","title":"Clone CORE Repo","text":"
cd ~/Documents\ngit clone https://github.com/coreemu/core.git\ncd core\ngit checkout develop\n
"},{"location":"devguide.html#install-the-development-environment","title":"Install the Development Environment","text":"

This command will automatically install system dependencies, clone and build OSPF-MDR, build CORE, setup the CORE poetry environment, and install pre-commit hooks. You can refer to the install docs for issues related to different distributions.

./install -d\n
"},{"location":"devguide.html#pre-commit","title":"pre-commit","text":"

pre-commit hooks help automate running tools to check modified code. Every time a commit is made, python utilities will be run to check the validity of the code, potentially failing and backing out the commit. These checks are mandated as part of the current CI, so fix any reported issues, add the changes, and commit again.

"},{"location":"devguide.html#running-core","title":"Running CORE","text":"

You can now run core as you normally would, or leverage some of the invoke tasks to conveniently run tests, etc.

# run core-daemon\nsudo core-daemon\n\n# run gui\ncore-gui\n\n# run mocked unit tests\ncd <CORE_REPO>\ninv test-mock\n
"},{"location":"devguide.html#linux-network-namespace-commands","title":"Linux Network Namespace Commands","text":"

Linux network namespace containers are often managed using the Linux Container Tools or lxc-tools package. The lxc-tools website, http://lxc.sourceforge.net/, has more information. CORE does not use these management utilities, but includes its own set of tools for instantiating and configuring network namespace containers. This section describes these tools.

"},{"location":"devguide.html#vnoded","title":"vnoded","text":"

The vnoded daemon is the program used to create a new namespace, and listen on a control channel for commands that may instantiate other processes. This daemon runs as PID 1 in the container. It is launched automatically by the CORE daemon. The control channel is a UNIX domain socket usually named /tmp/pycore.23098/n3, for node 3 running on CORE session 23098, for example. Root privileges are required for creating a new namespace.

"},{"location":"devguide.html#vcmd","title":"vcmd","text":"

The vcmd program is used to connect to the vnoded daemon in a Linux network namespace, for running commands in the namespace. The CORE daemon uses the same channel for setting up a node and running processes within it. This program has two required arguments, the control channel name, and the command line to be run within the namespace. This command does not need to run with root privileges.

When you double-click on a node in a running emulation, CORE will open a shell window for that node using a command such as:

gnome-terminal -e vcmd -c /tmp/pycore.50160/n1 -- bash\n

Similarly, the IPv4 routes Observer Widget will run a command to display the routing table using a command such as:

vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro\n
"},{"location":"devguide.html#core-cleanup-script","title":"core-cleanup script","text":"

A script named core-cleanup is provided to clean up any running CORE emulations. It will attempt to kill any remaining vnoded processes, kill any EMANE processes, remove the /tmp/pycore.* session directories, and remove any bridges or nftables rules. With a -d option, it will also kill any running CORE daemon.

"},{"location":"devguide.html#netns-command","title":"netns command","text":"

The netns command is not used by CORE directly. This utility can be used to run a command in a new network namespace for testing purposes. It does not open a control channel for receiving further commands.

"},{"location":"devguide.html#other-useful-commands","title":"Other Useful Commands","text":"

Here are some other Linux commands that are useful for managing the Linux network namespace emulation.

# view the Linux bridging setup\nip link show type bridge\n# view the netem rules used for applying link effects\ntc qdisc show\n# view the rules that make the wireless LAN work\nnft list ruleset\n
"},{"location":"distributed.html","title":"CORE - Distributed Emulation","text":""},{"location":"distributed.html#overview","title":"Overview","text":"

A large emulation scenario can be deployed on multiple emulation servers and controlled by a single GUI. The GUI, representing the entire topology, can be run on one of the emulation servers or on a separate machine.

Each machine that will act as an emulation server will require the installation of a distributed CORE package and some configuration to allow SSH as root.

"},{"location":"distributed.html#core-configuration","title":"CORE Configuration","text":"

CORE configuration settings required for using distributed functionality.

Edit /etc/core/core.conf or specific configuration file being used.

# uncomment and set this to the address that remote servers\n# use to get back to the main host, example below\ndistributed_address = 129.168.0.101\n
"},{"location":"distributed.html#emane-specific-configurations","title":"EMANE Specific Configurations","text":"

EMANE needs to have controlnet configured in core.conf in order to startup correctly. The names before the addresses need to match the names of distributed servers configured.

controlnet = core1:172.16.1.0/24 core2:172.16.2.0/24 core3:172.16.3.0/24 core4:172.16.4.0/24 core5:172.16.5.0/24\nemane_event_generate = True\n
"},{"location":"distributed.html#configuring-ssh","title":"Configuring SSH","text":"

Distributed CORE works using the python fabric library to run commands on remote servers over SSH.

"},{"location":"distributed.html#remote-gui-terminals","title":"Remote GUI Terminals","text":"

You need to have the same user defined on each server, since the user used for these remote shells is the same user that is running the CORE GUI.

Edit -> Preferences... -> Terminal program:

It is currently recommended to set this to xterm -e, as the default gnome-terminal will not work.

You may need to install xterm, if not already installed.

sudo apt install xterm\n
"},{"location":"distributed.html#distributed-server-ssh-configuration","title":"Distributed Server SSH Configuration","text":"

First the distributed servers must be configured to allow passwordless root login over SSH.

On distributed server:

# install openssh-server\nsudo apt install openssh-server\n\n# open sshd config\nvi /etc/ssh/sshd_config\n\n# verify these configurations in file\nPermitRootLogin yes\nPasswordAuthentication yes\n\n# if desired add/modify the following line to allow SSH to\n# accept all env variables\nAcceptEnv *\n\n# restart sshd\nsudo systemctl restart sshd\n

On master server:

# install package if needed\nsudo apt install openssh-client\n\n# generate ssh key if needed\nssh-keygen -o -t rsa -b 4096 -f ~/.ssh/core\n\n# copy public key to authorized_keys file\nssh-copy-id -i ~/.ssh/core root@server\n\n# configure fabric to use the core ssh key\nsudo vi /etc/fabric.yml\n\n# set configuration\nconnect_kwargs: {\"key_filename\": \"/home/user/.ssh/core\"}\n

On distributed server:

# open sshd config\nvi /etc/ssh/sshd_config\n\n# change configuration for root login to without password\nPermitRootLogin without-password\n\n# restart sshd\nsudo systemctl restart sshd\n
"},{"location":"distributed.html#fabric-config-file","title":"Fabric Config File","text":"

Make sure the value used below is the absolute path to the file generated above, ~/.ssh/core.

Add/update the fabric configuration file /etc/fabric.yml:

connect_kwargs: { \"key_filename\": \"/home/user/.ssh/core\" }\n
"},{"location":"distributed.html#add-emulation-servers-in-gui","title":"Add Emulation Servers in GUI","text":"

Within the core-gui navigate to menu option:

Session -> Servers...

Within the dialog box presented, add or modify an existing server if present, setting the name, address, and port for the server you plan to use.

Server configurations are loaded from and written to a configuration file for the GUI.

"},{"location":"distributed.html#assigning-nodes","title":"Assigning Nodes","text":"

The user needs to assign nodes to emulation servers in the scenario. Making no assignment means the node will be emulated on the master server. In the configuration window of every node, a drop-down box located between the Node name and the Image button will select the name of the emulation server. By default, this menu shows (none), indicating that the node will be emulated locally on the master. When entering Execute mode, the CORE GUI will deploy the node on its assigned emulation server.

Another way to assign emulation servers is to select one or more nodes using the select tool (ctrl-click to select multiple), and right-click one of the nodes and choose Assign to....

The CORE emulation servers dialog box may also be used to assign nodes to servers. The assigned server name appears in parentheses next to the node name. To assign all nodes to one of the servers, click on the server name and then the all nodes button. Servers that have assigned nodes are shown in blue in the server list. Another option is to first select a subset of nodes, then open the CORE emulation servers box and use the selected nodes button.

IMPORTANT: Leave the nodes unassigned if they are to be run on the master server. Do not explicitly assign the nodes to the master server.

"},{"location":"distributed.html#gui-visualization","title":"GUI Visualization","text":"

If there is a link between two nodes residing on different servers, the GUI will draw the link with a dashed line.

"},{"location":"distributed.html#concerns-and-limitations","title":"Concerns and Limitations","text":"

Wireless nodes, i.e. those connected to a WLAN node, can be assigned to different emulation servers and participate in the same wireless network only if an EMANE model is used for the WLAN. The basic range model does not work across multiple servers due to the Linux bridging and nftables rules that are used.

Note

The basic range wireless model does not support distributed emulation, but EMANE does.

When nodes are linked across servers, core-daemons will automatically create the necessary tunnels between the nodes when executed. Care should be taken to arrange the topology such that the number of tunnels is minimized. The tunnels carry data between servers to connect nodes as specified in the topology. These tunnels are created using GRE tunneling, similar to the Tunnel Tool.

"},{"location":"distributed.html#distributed-checklist","title":"Distributed Checklist","text":"
  1. Install CORE on master server
  2. Install distributed CORE package on all servers needed
  3. Install and configure public-key SSH access on all servers (if you want to use double-click shells or Widgets) for both the GUI user (for terminals) and root (for running CORE commands)
  4. Update CORE configuration as needed
  5. Choose the servers that participate in distributed emulation.
  6. Assign nodes to desired servers, empty for master server.
  7. Press the Start button to launch the distributed emulation.
"},{"location":"docker.html","title":"Docker Node Support","text":""},{"location":"docker.html#overview","title":"Overview","text":"

Provided below is some information to help set up and use Docker nodes within a CORE scenario.

"},{"location":"docker.html#installation","title":"Installation","text":""},{"location":"docker.html#debian-systems","title":"Debian Systems","text":"
sudo apt install docker.io\n
"},{"location":"docker.html#rhel-systems","title":"RHEL Systems","text":""},{"location":"docker.html#configuration","title":"Configuration","text":"

Custom configuration is required to avoid iptables rules being added and to remove the need for the default docker network, since core will be orchestrating connections between nodes.

Place the file below in /etc/docker/daemon.json

{\n\"bridge\": \"none\",\n\"iptables\": false\n}\n
"},{"location":"docker.html#group-setup","title":"Group Setup","text":"

To use Docker nodes within the python GUI, you will need to make sure the user running the GUI is a member of the docker group.

# add group if does not exist\nsudo groupadd docker\n\n# add user to group\nsudo usermod -aG docker $USER\n\n# to get this change to take effect, log out and back in or run the following\nnewgrp docker\n
"},{"location":"docker.html#image-requirements","title":"Image Requirements","text":"

Images used by Docker nodes in CORE need to have networking tools installed for CORE to automate setup and configuration of the network within the container.

Example Dockerfile:

FROM ubuntu:latest\nRUN apt-get update\nRUN apt-get install -y iproute2 ethtool\n

Build image:

sudo docker build -t <name> .\n
"},{"location":"docker.html#tools-and-versions-tested-with","title":"Tools and Versions Tested With","text":"
  • Docker version 18.09.5, build e8ff056
  • nsenter from util-linux 2.31.1
"},{"location":"emane.html","title":"EMANE (Extendable Mobile Ad-hoc Network Emulator)","text":""},{"location":"emane.html#what-is-emane","title":"What is EMANE?","text":"

The Extendable Mobile Ad-hoc Network Emulator (EMANE) allows heterogeneous network emulation using a pluggable MAC and PHY layer architecture. The EMANE framework provides an implementation architecture for modeling different radio interface types in the form of Network Emulation Modules (NEMs) and incorporating these modules into a real-time emulation running in a distributed environment.

EMANE is developed by the U.S. Naval Research Laboratory (NRL) Code 5522 and Adjacent Link LLC, who maintain these websites:

  • https://github.com/adjacentlink/emane
  • http://www.adjacentlink.com/

Instead of building Linux Ethernet bridging networks with CORE, higher-fidelity wireless networks can be emulated using EMANE bound to virtual devices. CORE emulates layers 3 and above (network, session, application) with its virtual network stacks and process space for protocols and applications, while EMANE emulates layers 1 and 2 (physical and data link) using its pluggable PHY and MAC models.

The interface between CORE and EMANE is a TAP device. CORE builds the virtual node using Linux network namespaces, installs the TAP device into the namespace and instantiates one EMANE process in the namespace. The EMANE process binds a user space socket to the TAP device for sending and receiving data from CORE.

An EMANE instance sends and receives OTA (Over-The-Air) traffic to and from other EMANE instances via a control port (e.g. ctrl0, ctrl1). It also sends and receives Events to and from the Event Service using the same or a different control port. EMANE models are configured through the GUI's configuration dialog. A corresponding EmaneModel Python class is sub-classed for each supported EMANE model, to provide configuration items and their mapping to XML files. This way new models can be easily supported. When CORE starts the emulation, it generates the appropriate XML files that specify the EMANE NEM configuration, and launches the EMANE daemons.

Some EMANE models support location information to determine when packets should be dropped. EMANE has an event system where location events are broadcast to all NEMs. CORE can generate these location events when nodes are moved on the canvas. The canvas size and scale dialog has controls for mapping the X,Y coordinate system to a latitude, longitude geographic system that EMANE uses. When specified in the core.conf configuration file, CORE can also subscribe to EMANE location events and move the nodes on the canvas as they are moved in the EMANE emulation. This would occur when an Emulation Script Generator, for example, is running a mobility script.

"},{"location":"emane.html#emane-in-core","title":"EMANE in CORE","text":"

This section will cover some high level topics and examples for running and using EMANE in CORE.

You can find more detailed tutorials and examples at the EMANE Tutorial.

Every topic below assumes CORE, EMANE, and OSPF MDR have been installed.

Info

Demo files will be found within the core-gui ~/.coregui/xmls directory

  • XML Files (RF Pipe) - overview of generated XML files used to drive EMANE
  • GPSD (RF Pipe) - overview of running and integrating gpsd with EMANE
  • Precomputed (RF Pipe) - overview of using the precomputed propagation model
  • EEL (RF Pipe) - overview of using the Emulation Event Log (EEL) Generator
  • Antenna Profiles (RF Pipe) - overview of using antenna profiles in EMANE"},{"location":"emane.html#emane-configuration","title":"EMANE Configuration","text":"

The CORE configuration file /etc/core/core.conf has options specific to EMANE. An example emane section from the core.conf file is shown below:

# EMANE configuration\nemane_platform_port = 8101\nemane_transform_port = 8201\nemane_event_monitor = False\n#emane_models_dir = /home/<user>/.coregui/custom_emane\n# EMANE log level range [0,4] default: 2\nemane_log_level = 2\nemane_realtime = True\n# prefix used for emane installation\n# emane_prefix = /usr\n

If you have an EMANE event generator (e.g. mobility or pathloss scripts) and want to have CORE subscribe to EMANE location events, set the following line in the core.conf configuration file.

Note

Do not set this option to True if you want to manually drag nodes around on the canvas to update their location in EMANE.

emane_event_monitor = True\n

Another common issue is if installing EMANE from source, the default configure prefix will place the DTD files in /usr/local/share/emane/dtd while CORE expects them in /usr/share/emane/dtd.

Update the EMANE prefix configuration to resolve this problem.

emane_prefix = /usr/local\n
"},{"location":"emane.html#custom-emane-models","title":"Custom EMANE Models","text":"

CORE supports custom developed EMANE models by way of dynamically loading user created python files that represent the model. Custom EMANE models should be placed within the path defined by emane_models_dir in the CORE configuration file. This path cannot end in /emane.

Here is an example model with documentation describing functionality:

\"\"\"\nExample custom emane model.\n\"\"\"\nfrom pathlib import Path\nfrom typing import Dict, Optional, Set, List\n\nfrom core.config import Configuration\nfrom core.emane import emanemanifest, emanemodel\n\n\nclass ExampleModel(emanemodel.EmaneModel):\n\"\"\"\n    Custom emane model.\n\n    :cvar name: defines the emane model name that will show up in the GUI\n\n    Mac Definition:\n    :cvar mac_library: defines that mac library that the model will reference\n    :cvar mac_xml: defines the mac manifest file that will be parsed to obtain configuration options,\n        that will be displayed within the GUI\n    :cvar mac_defaults: allows you to override options that are maintained within the manifest file above\n    :cvar mac_config: parses the manifest file and converts configurations into core supported formats\n\n    Phy Definition:\n    NOTE: phy configuration will default to the universal model as seen below and the below section does not\n    have to be included\n    :cvar phy_library: defines that phy library that the model will reference, used if you need to\n        provide a custom phy\n    :cvar phy_xml: defines the phy manifest file that will be parsed to obtain configuration options,\n        that will be displayed within the GUI\n    :cvar phy_defaults: allows you to override options that are maintained within the manifest file above\n        or for the default universal model\n    :cvar phy_config: parses the manifest file and converts configurations into core supported formats\n\n    Custom Override Options:\n    NOTE: these options default to what's seen below and do not have to be included\n    :cvar config_ignore: allows you to ignore options within phy/mac, used typically if you needed to add\n        a custom option for display within the gui\n    \"\"\"\n\n    name: str = \"emane_example\"\n    mac_library: str = \"rfpipemaclayer\"\n    mac_xml: str = \"/usr/share/emane/manifest/rfpipemaclayer.xml\"\n    mac_defaults: Dict[str, str] = {\n        \"pcrcurveuri\": \"/usr/share/emane/xml/models/mac/rfpipe/rfpipepcr.xml\"\n    }\n    mac_config: List[Configuration] = []\n    phy_library: Optional[str] = None\n    phy_xml: str = \"/usr/share/emane/manifest/emanephy.xml\"\n    phy_defaults: Dict[str, str] = {\n        \"subid\": \"1\", \"propagationmodel\": \"2ray\", \"noisemode\": \"none\"\n    }\n    phy_config: List[Configuration] = []\n    config_ignore: Set[str] = set()\n\n    @classmethod\n    def load(cls, emane_prefix: Path) -> None:\n\"\"\"\n        Called after being loaded within the EmaneManager. Provides configured\n        emane_prefix for parsing xml files.\n\n        :param emane_prefix: configured emane prefix path\n        :return: nothing\n        \"\"\"\n        cls._load_platform_config(emane_prefix)\n        manifest_path = \"share/emane/manifest\"\n        # load mac configuration\n        mac_xml_path = emane_prefix / manifest_path / cls.mac_xml\n        cls.mac_config = emanemanifest.parse(mac_xml_path, cls.mac_defaults)\n        # load phy configuration\n        phy_xml_path = emane_prefix / manifest_path / cls.phy_xml\n        cls.phy_config = emanemanifest.parse(phy_xml_path, cls.phy_defaults)\n
"},{"location":"emane.html#single-pc-with-emane","title":"Single PC with EMANE","text":"

This section describes running CORE and EMANE on a single machine. This is the default mode of operation when building an EMANE network with CORE. The OTA manager and Event service interface are set to use ctrl0 and the virtual nodes use the primary control channel for communicating with one another. The primary control channel is automatically activated when a scenario involves EMANE. Using the primary control channel prevents your emulation session from sending multicast traffic on your local network and interfering with other EMANE users.

EMANE is configured through an EMANE node. Once a node is linked to an EMANE cloud, the radio interface on that node may also be configured separately (apart from the cloud.)

Right click on an EMANE node and select EMANE Config to open the configuration dialog. The EMANE models should be listed here for selection. (You may need to restart the CORE daemon if it was running prior to installing the EMANE Python bindings.)

When an EMANE model is selected, you can click on the model options button, causing the GUI to query the CORE daemon for configuration items. Each model will have different parameters; refer to the EMANE documentation for an explanation of each item. The default values are presented in the dialog. Clicking Apply and Apply again will store the EMANE model selections.

The RF-PIPE and IEEE 802.11abg models use a Universal PHY that supports geographic location information for determining pathloss between nodes. A default latitude and longitude location is provided by CORE and this location-based pathloss is enabled by default; this is the pathloss mode setting for the Universal PHY. Moving a node on the canvas while the emulation is running generates location events for EMANE. To view or change the geographic location or scale of the canvas use the Canvas Size and Scale dialog available from the Canvas menu.

Note that conversion between geographic and Cartesian coordinate systems is done using UTM (Universal Transverse Mercator) projection, where different zones of 6 degree longitude bands are defined. The location events generated by CORE may become inaccurate near the zone boundaries for very large scenarios that span multiple UTM zones. It is recommended that EMANE location scripts be used to achieve geo-location accuracy in this situation.

Clicking the green Start button launches the emulation and causes TAP devices to be created in the virtual nodes that are linked to the EMANE WLAN. These devices appear with interface names such as eth0, eth1, etc. The EMANE processes should now be running in each namespace.

To view the configuration generated by CORE, look in the /tmp/pycore.nnnnn/ session directory to find the generated EMANE xml files. One easy way to view this information is by double-clicking one of the virtual nodes and listing the files in the shell.

"},{"location":"emane.html#distributed-emane","title":"Distributed EMANE","text":"

Running CORE and EMANE distributed among two or more emulation servers is similar to running on a single machine. There are a few key configuration items that need to be set in order to be successful, and those are outlined here.

It is a good idea to maintain separate networks for data (OTA) and control. The control network may be a shared laboratory network, for example, and you do not want multicast traffic on the data network to interfere with other EMANE users. Furthermore, control traffic could interfere with the OTA latency and throughput and might affect emulation fidelity. The examples described here will use eth0 as a control interface and eth1 as a data interface, although using separate interfaces is not strictly required. Note that these interface names refer to interfaces present on the host machine, not virtual interfaces within a node.

IMPORTANT: If an auxiliary control network is used, an interface on the host has to be assigned to that network.

Each machine that will act as an emulation server needs to have distributed CORE and EMANE installed, as well as be set up to work in CORE distributed mode.

The IP addresses of the available servers are configured from the CORE servers dialog box. The dialog shows available servers, some or all of which may be assigned to nodes on the canvas.

Nodes need to be assigned to servers and can be done so using the node configuration dialog. When a node is not assigned to any emulation server, it will be emulated locally.

Using the EMANE node configuration dialog, you can change the EMANE model being used, along with changing any configuration settings from their defaults.

Note

Here is a quick checklist for distributed emulation with EMANE.

  1. Follow the steps outlined for normal CORE.
  2. Assign nodes to desired servers
  3. Synchronize your machines' clocks prior to starting the emulation, using ntp or ptp. Some EMANE models are sensitive to timing.
  4. Press the Start button to launch the distributed emulation.

Now when the Start button is used to instantiate the emulation, the local CORE daemon will connect to other emulation servers that have been assigned to nodes. Each server will have its own session directory where the platform.xml file and other EMANE XML files are generated. The NEM IDs are automatically coordinated across servers so there is no overlap.

An Ethernet device is used for disseminating multicast EMANE events, as specified in the configure emane dialog. EMANE's Event Service can be run with mobility or pathloss scripts. If CORE is not subscribed to location events, it will generate them as nodes are moved on the canvas.

Double-clicking on a node during runtime will cause the GUI to attempt to SSH to the emulation server for that node and run an interactive shell. The public key SSH configuration should be tested with all emulation servers prior to starting the emulation.

"},{"location":"grpc.html","title":"gRPC","text":"
  • Table of Contents
"},{"location":"grpc.html#overview","title":"Overview","text":"

gRPC is a client/server API for interfacing with CORE and is used by the python GUI for driving all functionality. It depends on having a running core-daemon instance to leverage.

A python client can be created from the raw generated grpc files included with CORE, or one can leverage the provided gRPC client that encapsulates some functionality to help make things easier.

"},{"location":"grpc.html#python-client","title":"Python Client","text":"

A python client wrapper is provided at CoreGrpcClient to help provide some conveniences when using the API.
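
A brief sketch of typical usage, assuming a core-daemon is running on its default address:

from core.api.grpc import client\n\n# create the wrapper client and connect to the running core-daemon\ncore = client.CoreGrpcClient()\ncore.connect()\n\n# query the daemon, for example listing known sessions\nsessions = core.get_sessions()\nprint(sessions)\n\n# close the connection when finished\ncore.close()\n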

"},{"location":"grpc.html#client-http-proxy","title":"Client HTTP Proxy","text":"

Since gRPC is HTTP2 based, proxy configurations can cause issues. By default, the client disables proxy support to avoid issues when a proxy is present. You can enable proxy support and properly account for it when needed.
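
A minimal sketch, assuming the client constructor exposes a proxy flag for re-enabling proxy support:

from core.api.grpc import client\n\n# assumption: proxy=True re-enables proxy support on the client, for\n# environments where gRPC traffic must traverse an HTTP proxy\ncore = client.CoreGrpcClient(proxy=True)\ncore.connect()\n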

"},{"location":"grpc.html#proto-files","title":"Proto Files","text":"

Proto files are used to define the API and protobuf messages that are used for interfaces with this API.

They can be found here to see the specifics of each call and the response message values that would be returned.

"},{"location":"grpc.html#examples","title":"Examples","text":""},{"location":"grpc.html#node-models","title":"Node Models","text":"

When creating nodes of type NodeType.DEFAULT these are the default models and the services they map to.

  • mdr
    • zebra, OSPFv3MDR, IPForward
  • PC
    • DefaultRoute
  • router
    • zebra, OSPFv2, OSPFv3, IPForward
  • host
    • DefaultRoute, SSH
"},{"location":"grpc.html#interface-helper","title":"Interface Helper","text":"

There is an interface helper class that can be leveraged for convenience when creating interface data for nodes. Alternatively, one can manually create a core.api.grpc.wrappers.Interface instance with the appropriate information.

Manually creating gRPC client interface:

from core.api.grpc.wrappers import Interface\n\n# id is optional and will set to the next available id\n# name is optional and will default to eth<id>\n# mac is optional and will result in a randomly generated mac\niface = Interface(\n    id=0,\n    name=\"eth0\",\n    ip4=\"10.0.0.1\",\n    ip4_mask=24,\n    ip6=\"2001::\",\n    ip6_mask=64,\n)\n

Leveraging the interface helper class:

from core.api.grpc import client\n\niface_helper = client.InterfaceHelper(ip4_prefix=\"10.0.0.0/24\", ip6_prefix=\"2001::/64\")\n# node_id is used to get an ip4/ip6 address indexed from within the above prefixes\n# iface_id is required and used exactly for that\n# name is optional and would default to eth<id>\n# mac is optional and will result in a randomly generated mac\niface_data = iface_helper.create_iface(\n    node_id=1, iface_id=0, name=\"eth0\", mac=\"00:00:00:00:aa:00\"\n)\n
"},{"location":"grpc.html#listening-to-events","title":"Listening to Events","text":"

Various events that can occur within a session can be listened to.

Event types:

  • session - events for changes in session state and mobility start/stop/pause
  • node - events for node movements and icon changes
  • link - events for link configuration changes and wireless link add/delete
  • config - configuration events when legacy gui joins a session
  • exception - alert/error events
  • file - file events when the legacy gui joins a session
from core.api.grpc import client\nfrom core.api.grpc.wrappers import EventType\n\n\ndef event_listener(event):\n    print(event)\n\n\n# create grpc client and connect\ncore = client.CoreGrpcClient()\ncore.connect()\n\n# add session\nsession = core.create_session()\n\n# provide no events to listen to all events\ncore.events(session.id, event_listener)\n\n# provide events to listen to specific events\ncore.events(session.id, event_listener, [EventType.NODE])\n
"},{"location":"grpc.html#configuring-links","title":"Configuring Links","text":"

Links can be configured at the time of creation or during runtime.

Currently supported configuration options:

  • bandwidth (bps)
  • delay (us)
  • duplicate (%)
  • jitter (us)
  • loss (%)
from core.api.grpc import client\nfrom core.api.grpc.wrappers import LinkOptions, Position\n\n# interface helper\niface_helper = client.InterfaceHelper(ip4_prefix=\"10.0.0.0/24\", ip6_prefix=\"2001::/64\")\n\n# create grpc client and connect\ncore = client.CoreGrpcClient()\ncore.connect()\n\n# add session\nsession = core.create_session()\n\n# create nodes\nposition = Position(x=100, y=100)\nnode1 = session.add_node(1, position=position)\nposition = Position(x=300, y=100)\nnode2 = session.add_node(2, position=position)\n\n# configuring when creating a link\noptions = LinkOptions(\n    bandwidth=54_000_000,\n    delay=5000,\n    dup=5,\n    loss=5.5,\n    jitter=0,\n)\niface1 = iface_helper.create_iface(node1.id, 0)\niface2 = iface_helper.create_iface(node2.id, 0)\nlink = session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2)\n\n# configuring during runtime\nlink.options.loss = 10.0\ncore.edit_link(session.id, link)\n
"},{"location":"grpc.html#peer-to-peer-example","title":"Peer to Peer Example","text":"
# required imports\nfrom core.api.grpc import client\nfrom core.api.grpc.wrappers import Position\n\n# interface helper\niface_helper = client.InterfaceHelper(ip4_prefix=\"10.0.0.0/24\", ip6_prefix=\"2001::/64\")\n\n# create grpc client and connect\ncore = client.CoreGrpcClient()\ncore.connect()\n\n# add session\nsession = core.create_session()\n\n# create nodes\nposition = Position(x=100, y=100)\nnode1 = session.add_node(1, position=position)\nposition = Position(x=300, y=100)\nnode2 = session.add_node(2, position=position)\n\n# create link\niface1 = iface_helper.create_iface(node1.id, 0)\niface2 = iface_helper.create_iface(node2.id, 0)\nsession.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2)\n\n# start session\ncore.start_session(session)\n
"},{"location":"grpc.html#switchhub-example","title":"Switch/Hub Example","text":"
# required imports\nfrom core.api.grpc import client\nfrom core.api.grpc.wrappers import NodeType, Position\n\n# interface helper\niface_helper = client.InterfaceHelper(ip4_prefix=\"10.0.0.0/24\", ip6_prefix=\"2001::/64\")\n\n# create grpc client and connect\ncore = client.CoreGrpcClient()\ncore.connect()\n\n# add session\nsession = core.create_session()\n\n# create nodes\nposition = Position(x=200, y=200)\nswitch = session.add_node(1, _type=NodeType.SWITCH, position=position)\nposition = Position(x=100, y=100)\nnode1 = session.add_node(2, position=position)\nposition = Position(x=300, y=100)\nnode2 = session.add_node(3, position=position)\n\n# create links\niface1 = iface_helper.create_iface(node1.id, 0)\nsession.add_link(node1=node1, node2=switch, iface1=iface1)\niface1 = iface_helper.create_iface(node2.id, 0)\nsession.add_link(node1=node2, node2=switch, iface1=iface1)\n\n# start session\ncore.start_session(session)\n
"},{"location":"grpc.html#wlan-example","title":"WLAN Example","text":"
# required imports\nfrom core.api.grpc import client\nfrom core.api.grpc.wrappers import NodeType, Position\n\n# interface helper\niface_helper = client.InterfaceHelper(ip4_prefix=\"10.0.0.0/24\", ip6_prefix=\"2001::/64\")\n\n# create grpc client and connect\ncore = client.CoreGrpcClient()\ncore.connect()\n\n# add session\nsession = core.create_session()\n\n# create nodes\nposition = Position(x=200, y=200)\nwlan = session.add_node(1, _type=NodeType.WIRELESS_LAN, position=position)\nposition = Position(x=100, y=100)\nnode1 = session.add_node(2, model=\"mdr\", position=position)\nposition = Position(x=300, y=100)\nnode2 = session.add_node(3, model=\"mdr\", position=position)\n\n# create links\niface1 = iface_helper.create_iface(node1.id, 0)\nsession.add_link(node1=node1, node2=wlan, iface1=iface1)\niface1 = iface_helper.create_iface(node2.id, 0)\nsession.add_link(node1=node2, node2=wlan, iface1=iface1)\n\n# set wlan config using a dict mapping currently\n# support values as strings\nwlan.set_wlan(\n    {\n        \"range\": \"280\",\n        \"bandwidth\": \"55000000\",\n        \"delay\": \"6000\",\n        \"jitter\": \"5\",\n        \"error\": \"5\",\n    }\n)\n\n# start session\ncore.start_session(session)\n
"},{"location":"grpc.html#emane-example","title":"EMANE Example","text":"

For EMANE you can import and use one of the existing models and use its name for configuration.

Current models:

  • core.emane.ieee80211abg.EmaneIeee80211abgModel
  • core.emane.rfpipe.EmaneRfPipeModel
  • core.emane.tdma.EmaneTdmaModel
  • core.emane.bypass.EmaneBypassModel

Their configuration options are driven dynamically from parsed EMANE manifest files from the installed version of EMANE.

Options and their purpose can be found at the EMANE Wiki.

When configuring EMANE global settings or model mac/phy specific settings, any value not provided will fall back to its default. When no configuration is provided at all, the defaults are used.

# required imports\nfrom core.api.grpc import client\nfrom core.api.grpc.wrappers import NodeType, Position\nfrom core.emane.models.ieee80211abg import EmaneIeee80211abgModel\n\n# interface helper\niface_helper = client.InterfaceHelper(ip4_prefix=\"10.0.0.0/24\", ip6_prefix=\"2001::/64\")\n\n# create grpc client and connect\ncore = client.CoreGrpcClient()\ncore.connect()\n\n# add session\nsession = core.create_session()\n\n# create nodes\nposition = Position(x=200, y=200)\nemane = session.add_node(\n    1, _type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name\n)\nposition = Position(x=100, y=100)\nnode1 = session.add_node(2, model=\"mdr\", position=position)\nposition = Position(x=300, y=100)\nnode2 = session.add_node(3, model=\"mdr\", position=position)\n\n# create links\niface1 = iface_helper.create_iface(node1.id, 0)\nsession.add_link(node1=node1, node2=emane, iface1=iface1)\niface1 = iface_helper.create_iface(node2.id, 0)\nsession.add_link(node1=node2, node2=emane, iface1=iface1)\n\n# setting emane specific emane model configuration\nemane.set_emane_model(EmaneIeee80211abgModel.name, {\n    \"eventservicettl\": \"2\",\n    \"unicastrate\": \"3\",\n})\n\n# start session\ncore.start_session(session)\n

EMANE Model Configuration:

# emane network specific config, set on an emane node\n# this setting applies to all nodes connected\nemane.set_emane_model(EmaneIeee80211abgModel.name, {\"unicastrate\": \"3\"})\n\n# node specific config for an individual node connected to an emane network\nnode.set_emane_model(EmaneIeee80211abgModel.name, {\"unicastrate\": \"3\"})\n\n# node interface specific config for an individual node connected to an emane network\nnode.set_emane_model(EmaneIeee80211abgModel.name, {\"unicastrate\": \"3\"}, iface_id=0)\n
"},{"location":"grpc.html#configuring-a-service","title":"Configuring a Service","text":"

Services help generate and run bash scripts on nodes for a given purpose.

Configuring the files of a service results in a specific, hard-coded script being generated in place of the default scripts, which may otherwise leverage dynamic generation.

The following features can be configured for a service:

  • files - files that will be generated
  • directories - directories that will be mounted unique to the node
  • startup - commands to run to start a service
  • validate - commands to run to validate a service
  • shutdown - commands to run to stop a service

Editing service properties:

# configure a service, for a node, for a given session\nnode.service_configs[service_name] = NodeServiceData(\n    configs=[\"file1.sh\", \"file2.sh\"],\n    directories=[\"/etc/node\"],\n    startup=[\"bash file1.sh\"],\n    validate=[],\n    shutdown=[],\n)\n

When editing a service file, it must be the name of a config file that the service will generate.

Editing a service file:

# to edit the contents of a generated file you can specify\n# the service, the file name, and its contents\nfile_configs = node.service_file_configs.setdefault(service_name, {})\nfile_configs[file_name] = \"echo hello world\"\n
"},{"location":"grpc.html#file-examples","title":"File Examples","text":"

File versions of the network examples can be found here. These examples will create a session using the gRPC API when the core-daemon is running.

You can then switch to and attach to these sessions using either of the CORE GUIs.

"},{"location":"gui.html","title":"CORE GUI","text":""},{"location":"gui.html#overview","title":"Overview","text":"

The GUI is used to draw nodes and network devices on a canvas, linking them together to create an emulated network session.

After pressing the start button, CORE will proceed through these phases, staying in the runtime phase. After the session is stopped, CORE will proceed to the data collection phase before tearing down the emulated state.

CORE can be customized to perform any action at each state. See the Hooks... entry on the Session Menu for details about when these session states are reached.

"},{"location":"gui.html#prerequisites","title":"Prerequisites","text":"

Beyond installing CORE, you must have the CORE daemon running. This is done on the command line with either systemd or sysv.

# systemd service\nsudo systemctl daemon-reload\nsudo systemctl start core-daemon\n\n# direct invocation\nsudo core-daemon\n
"},{"location":"gui.html#gui-files","title":"GUI Files","text":"

The GUI will create a directory in your home directory on first run called ~/.coregui. This directory will help layout various files that the GUI may use.

  • .coregui/
    • backgrounds/
      • place backgrounds used for display in the GUI
    • custom_emane/
      • place to keep custom emane models to use with the core-daemon
    • custom_services/
      • place to keep custom services to use with the core-daemon
    • icons/
      • icons the GUI uses along with customs icons desired
    • mobility/
      • place to keep custom mobility files
    • scripts/
      • place to keep core related scripts
    • xmls/
      • place to keep saved session xml files
    • gui.log
      • log file when running the gui; look here for exceptions and other issues when they occur
    • config.yaml
      • configuration file used to save/load various gui related settings (custom nodes, layouts, addresses, etc)
"},{"location":"gui.html#modes-of-operation","title":"Modes of Operation","text":"

The CORE GUI has two primary modes of operation, Edit and Execute modes. Running the GUI, by typing core-gui with no options, starts in Edit mode. Nodes are drawn on a blank canvas using the toolbar on the left and configured from right-click menus or by double-clicking them. The GUI does not need to be run as root.

Once editing is complete, pressing the green Start button instantiates the topology and enters Execute mode. In execute mode, the user can interact with the running emulated machines by double-clicking or right-clicking on them. The editing toolbar disappears and is replaced by an execute toolbar, which provides tools while running the emulation. Pressing the red Stop button will destroy the running emulation and return CORE to Edit mode.

Once the emulation is running, the GUI can be closed, and a prompt will appear asking if the emulation should be terminated. The emulation may be left running and the GUI can reconnect to an existing session at a later time.

The GUI can be run as a normal user on Linux.

The GUI currently provides the following options on startup.

usage: core-gui [-h] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-p]\n[-s SESSION] [--create-dir]\n\nCORE Python GUI\n\noptional arguments:\n  -h, --help            show this help message and exit\n-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nlogging level\n  -p, --proxy           enable proxy\n  -s SESSION, --session SESSION\n                        session id to join\n  --create-dir          create gui directory and exit\n
"},{"location":"gui.html#toolbar","title":"Toolbar","text":"

The toolbar is a row of buttons that runs vertically along the left side of the CORE GUI window. The toolbar changes depending on the mode of operation.

"},{"location":"gui.html#editing-toolbar","title":"Editing Toolbar","text":"

When CORE is in Edit mode (the default), the vertical Editing Toolbar exists on the left side of the CORE window. Below are brief descriptions for each toolbar item, starting from the top. Most of the tools are grouped into related sub-menus, which appear when you click on their group icon.

Icon Name Description Selection Tool Tool for selecting, moving, configuring nodes. Start Button Starts Execute mode, instantiates the emulation. Link Allows network links to be drawn between two nodes by clicking and dragging the mouse."},{"location":"gui.html#core-nodes","title":"CORE Nodes","text":"

These nodes will create a new node container and run associated services.

Icon Name Description Router Runs Quagga OSPFv2 and OSPFv3 routing to forward packets. Host Emulated server machine having a default route, runs SSH server. PC Basic emulated machine having a default route, runs no processes by default. MDR Runs Quagga OSPFv3 MDR routing for MANET-optimized routing. PRouter Physical router represents a real testbed machine."},{"location":"gui.html#network-nodes","title":"Network Nodes","text":"

These nodes are mostly used to create a Linux bridge that serves the purpose described below.

Icon Name Description Hub Ethernet hub forwards incoming packets to every connected node. Switch Ethernet switch intelligently forwards incoming packets to attached hosts using an Ethernet address hash table. Wireless LAN When routers are connected to this WLAN node, they join a wireless network and an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between attached wireless nodes based on the distance between them. RJ45 RJ45 Physical Interface Tool, emulated nodes can be linked to real physical interfaces; using this tool, real networks and devices can be physically connected to the live-running emulation. Tunnel Tool allows connecting together more than one CORE emulation using GRE tunnels."},{"location":"gui.html#annotation-tools","title":"Annotation Tools","text":"Icon Name Description Marker For drawing marks on the canvas. Oval For drawing circles on the canvas that appear in the background. Rectangle For drawing rectangles on the canvas that appear in the background. Text For placing text captions on the canvas."},{"location":"gui.html#execution-toolbar","title":"Execution Toolbar","text":"

When the Start button is pressed, CORE switches to Execute mode, and the Edit toolbar on the left of the CORE window is replaced with the Execution toolbar. Below are the items on this toolbar, starting from the top.

Icon Name Description Stop Button Stops Execute mode, terminates the emulation, returns CORE to edit mode. Selection Tool In Execute mode, the Selection Tool can be used for moving nodes around the canvas, and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up menu of run-time options for that node. Marker For drawing freehand lines on the canvas, useful during demonstrations; markings are not saved. Run Tool This tool allows easily running a command on all or a subset of all nodes. A list box allows selecting any of the nodes. A text entry box allows entering any command. The command should return immediately, otherwise the display will block awaiting response. The ping command, for example, with no parameters, is not a good idea. The result of each command is displayed in a results box. The first occurrence of the special text \"NODE\" will be replaced with the node name. The command will not be attempted to run on nodes that are not routers, PCs, or hosts, even if they are selected."},{"location":"gui.html#menu","title":"Menu","text":"

The menubar runs along the top of the CORE GUI window and provides access to a variety of features. Some of the menus are detachable, such as the Widgets menu, by clicking the dashed line at the top.

"},{"location":"gui.html#file-menu","title":"File Menu","text":"

The File menu contains options for saving and opening saved sessions.

Option Description New Session This starts a new session with an empty canvas. Save Saves the current topology. If you have not yet specified a file name, the Save As dialog box is invoked. Save As Invokes the Save As dialog box for selecting a new .xml file for saving the current configuration in the XML file. Open Invokes the File Open dialog box for selecting a new XML file to open. Recently used files Above the Quit menu command is a list of recently use files, if any have been opened. You can clear this list in the Preferences dialog box. You can specify the number of files to keep in this list from the Preferences dialog. Click on one of the file names listed to open that configuration file. Execute Python Script Invokes a File Open dialog box for selecting a Python script to run and automatically connect to. After a selection is made, a Python Script Options dialog box is invoked to allow for command-line options to be added. The Python script must create a new CORE Session and add this session to the daemon's list of sessions in order for this to work. Quit The Quit command should be used to exit the CORE GUI. CORE may prompt for termination if you are currently in Execute mode. Preferences and the recently-used files list are saved."},{"location":"gui.html#edit-menu","title":"Edit Menu","text":"Option Description Preferences Invokes the Preferences dialog box. Custom Nodes Custom node creation dialog box. Undo (Disabled) Attempts to undo the last edit in edit mode. Redo (Disabled) Attempts to redo an edit that has been undone. Cut, Copy, Paste, Delete Used to cut, copy, paste, and delete a selection. When nodes are pasted, their node numbers are automatically incremented, and existing links are preserved with new IP addresses assigned. Services and their customizations are copied to the new node, but care should be taken as node IP addresses have changed with possibly old addresses remaining in any custom service configurations. Annotations may also be copied and pasted."},{"location":"gui.html#canvas-menu","title":"Canvas Menu","text":"

The canvas menu provides commands related to the editing canvas.

Option Description Size/scale Invokes a Canvas Size and Scale dialog that allows configuring the canvas size, scale, and geographic reference point. The size controls allow changing the width and height of the current canvas, in pixels or meters. The scale allows specifying how many meters are equivalent to 100 pixels. The reference point controls specify the latitude, longitude, and altitude reference point used to convert between geographic and Cartesian coordinate systems. By clicking the Save as default option, all new canvases will be created with these properties. The default canvas size can also be changed in the Preferences dialog box. Wallpaper Used for setting the canvas background image."},{"location":"gui.html#view-menu","title":"View Menu","text":"

The View menu features items for toggling on and off their display on the canvas.

Option Description Interface Names Display interface names on links. IPv4 Addresses Display IPv4 addresses on links. IPv6 Addresses Display IPv6 addresses on links. Node Labels Display node names. Link Labels Display link labels. Annotations Display annotations. Canvas Grid Display the canvas grid."},{"location":"gui.html#tools-menu","title":"Tools Menu","text":"

The tools menu lists different utility functions.

Option Description Find Display find dialog used for highlighting a node on the canvas. Auto Grid Automatically layout nodes in a grid. IP addresses Invokes the IP Addresses dialog box for configuring which IPv4/IPv6 prefixes are used when automatically addressing new interfaces. MAC addresses Invokes the MAC Addresses dialog box for configuring the starting number used as the lowest byte when generating each interface MAC address. This value should be changed when tunneling between CORE emulations to prevent MAC address conflicts."},{"location":"gui.html#widgets-menu","title":"Widgets Menu","text":"

Widgets are GUI elements that allow interaction with a running emulation. Widgets typically automate the running of commands on emulated nodes to report status information of some type and display this on screen.

"},{"location":"gui.html#periodic-widgets","title":"Periodic Widgets","text":"

These Widgets are those available from the main Widgets menu. More than one of these Widgets may be run concurrently. An event loop fires once every second that the emulation is running. If one of these Widgets is enabled, its periodic routine will be invoked at this time. Each Widget may have a configuration dialog box which is also accessible from the Widgets menu.

Here are some standard widgets:

  • Adjacency - displays router adjacency states for Quagga's OSPFv2 and OSPFv3 routing protocols. A line is drawn from each router halfway to the router ID of an adjacent router. The color of the line is based on the OSPF adjacency state such as Two-way or Full. To learn about the different colors, see the Configure Adjacency... menu item. The vtysh command is used to dump OSPF neighbor information. Only half of the line is drawn because each router may be in a different adjacency state with respect to the other.
  • Throughput - displays the kilobits-per-second throughput above each link, using statistics gathered from each link. If the throughput exceeds a certain threshold, the link will become highlighted. For wireless nodes which broadcast data to all nodes in range, the throughput rate is displayed next to the node and the node will become circled if the threshold is exceeded.
"},{"location":"gui.html#observer-widgets","title":"Observer Widgets","text":"

These Widgets are available from the Observer Widgets submenu of the Widgets menu, and from the Widgets Tool on the toolbar. Only one Observer Widget may be used at a time. Mouse over a node while the session is running to pop up an informational display about that node.

Available Observer Widgets include IPv4 and IPv6 routing tables, socket information, list of running processes, and OSPFv2/v3 neighbor information.

Observer Widgets may be edited by the user and rearranged. Choosing Widgets->Observer Widgets->Edit Observers from the Observer Widget menu will invoke the Observer Widgets dialog. A list of Observer Widgets is displayed along with up and down arrows for rearranging the list. Controls are available for renaming each widget, for changing the command that is run during mouse over, and for adding and deleting items from the list. Note that specified commands should return immediately to avoid delays in the GUI display. Changes are saved to a config.yaml file in the CORE configuration directory.

"},{"location":"gui.html#session-menu","title":"Session Menu","text":"

The Session Menu has entries for starting, stopping, and managing sessions, in addition to global options such as node types, comments, hooks, servers, and options.

Option Description Sessions Invokes the CORE Sessions dialog box containing a list of active CORE sessions in the daemon. Basic session information such as name, node count, start time, and a thumbnail are displayed. This dialog allows connecting to different sessions, shutting them down, or starting a new session. Servers Invokes the CORE emulation servers dialog for configuring emulation servers. Options Presents per-session options, such as the IPv4 prefix to be used, if any, for a control network; the ability to preserve the session directory; and an on/off switch for SDT3D support. Hooks Invokes the CORE Session Hooks window where scripts may be configured for a particular session state. The session states are defined in the table below. The top of the window has a list of configured hooks, and buttons on the bottom left allow adding, editing, and removing hook scripts. The new or edit button will open a hook script editing window. A hook script is a shell script invoked on the host (not within a virtual node)."},{"location":"gui.html#session-states","title":"Session States","text":"State Description Definition Used by the GUI to tell the backend to clear any state. Configuration When the user presses the Start button, node, link, and other configuration data is sent to the backend. This state is also reached when the user customizes a service. Instantiation After configuration data has been sent, just before the nodes are created. Runtime All nodes and networks have been built and are running. (This is the same state at which the previously-named global experiment script was run.) Datacollect The user has pressed the Stop button, but before services have been stopped and nodes have been shut down. This is a good time to collect log files and other data from the nodes. Shutdown All nodes and networks have been shut down and destroyed."},{"location":"gui.html#help-menu","title":"Help Menu","text":"Option Description CORE Github (www) Link to the CORE GitHub page. CORE Documentation (www) Link to the CORE Documentation page. About Invokes the About dialog box for viewing version information."},{"location":"gui.html#building-sample-networks","title":"Building Sample Networks","text":""},{"location":"gui.html#wired-networks","title":"Wired Networks","text":"

Wired networks are created using the Link Tool to draw a link between two nodes. This automatically draws a red line representing an Ethernet link and creates new interfaces on network-layer nodes.

Double-click on the link to invoke the link configuration dialog box. Here you can change the Bandwidth, Delay, Loss, and Duplicate rate parameters for that link. You can also modify the color and width of the link, affecting its display.

Link-layer nodes are provided for modeling wired networks. These do not create a separate network stack when instantiated, but are implemented using Linux bridging. These are the hub, switch, and wireless LAN nodes. The hub copies each packet from the incoming link to every connected link, while the switch behaves more like an Ethernet switch and keeps track of the Ethernet address of the connected peer, forwarding unicast traffic only to the appropriate ports.
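
Since these link-layer nodes are realized as Linux bridges, they can be inspected from the host while a session is running. A minimal sketch using standard iproute2 commands (bridge names are assigned by CORE and will vary):

# list Linux bridges created on the host\nip link show type bridge\n# show which interfaces are attached to which bridge\nbridge link\n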

The wireless LAN (WLAN) is covered in the next section.

"},{"location":"gui.html#wireless-networks","title":"Wireless Networks","text":"

Wireless networks allow moving nodes around to impact the connectivity between them. The connection between a pair of nodes is stronger when the nodes are closer together and weaker when they are farther apart. CORE offers several levels of wireless emulation fidelity, depending on modeling needs and available hardware.

  • WLAN Node
    • uses set bandwidth, delay, and loss
    • links are enabled or disabled based on a set range
    • uses the least CPU when moving, but nothing extra when not moving
  • Wireless Node
    • uses set bandwidth, delay, and initial loss
    • loss dynamically changes based on distance between nodes, which can be configured with range parameters
    • links are enabled or disabled based on a set range
    • uses more CPU to calculate loss for every movement, but nothing extra when not moving
  • EMANE Node
    • uses a physical layer model to account for signal propagation, antenna profile effects and interference sources in order to provide a realistic environment for wireless experimentation
    • uses the most CPU for every packet, as complex calculations are used for fidelity
    • See Wiki for details on general EMANE usage
    • See CORE EMANE for details on using EMANE in CORE
Model Type Supported Platform(s) Fidelity Description WLAN On/Off Linux Low Ethernet bridging with nftables Wireless On/Off Linux Medium Ethernet bridging with nftables EMANE RF Linux High TAP device connected to EMANE emulator with pluggable MAC and PHY radio types"},{"location":"gui.html#example-wlan-network-setup","title":"Example WLAN Network Setup","text":"

To quickly build a wireless network, you can first place several router nodes onto the canvas. If you have the Quagga MDR software installed, it is recommended that you use the mdr node type for reduced routing overhead. Next choose the WLAN from the Link-layer nodes submenu. Set the desired WLAN parameters by double-clicking the cloud icon. Then you can link all selected nodes to the WLAN by right-clicking on the WLAN and choosing Link to Selected.

Linking a router to the WLAN causes a small antenna to appear, but no red link line is drawn. Routers can have multiple wireless links and both wireless and wired links (however, you will need to manually configure route redistribution.) The mdr node type will generate a routing configuration that enables OSPFv3 with MANET extensions. This is a Boeing-developed extension to Quagga's OSPFv3 that reduces flooding overhead and optimizes the flooding procedure for mobile ad-hoc (MANET) networks.

The default configuration of the WLAN is set to use the basic range model. Having this model selected causes core-daemon to calculate the distance between nodes based on screen pixels. A numeric range in screen pixels is set for the wireless network using the Range slider. When two wireless nodes are within range of each other, a green line is drawn between them and they are linked. Two wireless nodes that are farther than the range pixels apart are not linked. During Execute mode, users may move wireless nodes around by clicking and dragging them, and wireless links will be dynamically made or broken.

"},{"location":"gui.html#running-commands-within-nodes","title":"Running Commands within Nodes","text":"

You can double click a node to bring up a terminal for running shell commands. Within the terminal you can run anything you like, and those commands will be run in the context of the node. For standard CORE nodes, the only thing to keep in mind is that you are using the host file system and anything you change or do can impact the greater system. By default, your terminal will open within the node's home directory for the running session, but it is temporary and will be removed when the session is stopped.
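
The same thing can also be done from the host without the GUI, using the vcmd utility; a minimal sketch, where the session id and node name are illustrative:

# run a single command inside node n1 of session 1 (control channel path is illustrative)\nvcmd -c /tmp/pycore.1/n1 -- ip addr show\n# or open an interactive shell inside the node\nvcmd -c /tmp/pycore.1/n1 -- bash\n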

You can also launch GUI based applications from within standard CORE nodes, but you need to enable xhost access to root.

xhost +local:root\n
"},{"location":"gui.html#mobility-scripting","title":"Mobility Scripting","text":"

CORE has a few ways to script mobility.

Option Description ns-2 script The script specifies either absolute positions or waypoints with a velocity. Locations are given with Cartesian coordinates. gRPC API An external entity can move nodes by leveraging the gRPC API. EMANE events See EMANE for details on using EMANE scripts to move nodes around. Location information is typically given as latitude, longitude, and altitude.

For the first method, you can create a mobility script using a text editor, or using a tool such as BonnMotion, and associate the script with one of the wireless networks using the WLAN configuration dialog box. Click the ns-2 mobility script... button, and set the mobility script file field in the resulting ns2script configuration dialog.

Here is an example for creating a BonnMotion script for 10 nodes:

bm -f sample RandomWaypoint -n 10 -d 60 -x 1000 -y 750\nbm NSFile -f sample\n# use the resulting 'sample.ns_movements' file in CORE\n

When the Execute mode is started and one of the WLAN nodes has a mobility script, a mobility script window will appear. This window contains controls for starting, stopping, and resetting the running time for the mobility script. The loop checkbox causes the script to play continuously. The resolution text box contains the number of milliseconds between each timer event; lower values cause the mobility to appear smoother but consume greater CPU time.

The format of an ns-2 mobility script looks like:

# nodes: 3, max time: 35.000000, max x: 600.00, max y: 600.00\n$node_(2) set X_ 144.0\n$node_(2) set Y_ 240.0\n$node_(2) set Z_ 0.00\n$ns_ at 1.00 \"$node_(2) setdest 130.0 280.0 15.0\"\n

The first three lines set an initial position for node 2. The last line in the above example causes node 2 to move towards the destination (130, 280) at speed 15. All units are screen coordinates, with speed in units per second. The total script time is learned after all nodes have reached their waypoints. Initially, the time slider in the mobility script dialog will not be accurate.

Example mobility scripts (and their associated topology files) can be found in the configs/ directory.

"},{"location":"gui.html#alerts","title":"Alerts","text":"

The alerts button is located in the bottom right-hand corner of the status bar in the CORE GUI. This will change colors to indicate one or more problems with the running emulation. Clicking on the alerts button will invoke the alerts dialog.

The alerts dialog contains a list of alerts received from the CORE daemon. An alert has a time, severity level, optional node number, and source. When the alerts button is red, this indicates one or more fatal exceptions. An alert with a fatal severity level indicates that one or more of the basic pieces of emulation could not be created, such as failure to create a bridge or namespace, or the failure to launch EMANE processes for an EMANE-based network.

Clicking on an alert displays details for that exception. The exception source is a text string to help trace where the exception occurred; \"service:UserDefined\" for example, would appear for a failed validation command with the UserDefined service.

A button is available at the bottom of the dialog for clearing the exception list.

"},{"location":"gui.html#customizing-your-topologys-look","title":"Customizing your Topology's Look","text":"

Several annotation tools are provided for changing the way your topology is presented. Captions may be added with the Text tool. Ovals and rectangles may be drawn in the background, helpful for visually grouping nodes together.

During live demonstrations the marker tool may be helpful for drawing temporary annotations on the canvas that may be quickly erased. A size and color palette appears at the bottom of the toolbar when the marker tool is selected. Markings are only temporary and are not saved in the topology file.

The basic node icons can be replaced with a custom image of your choice. Icons appear best when they use the GIF or PNG format with a transparent background. To change a node's icon, double-click the node to invoke its configuration dialog and click on the button to the right of the node name that shows the node's current icon.

A background image for the canvas may be set using the Wallpaper... option from the Canvas menu. The image may be centered, tiled, or scaled to fit the canvas size. An existing terrain, map, or network diagram could be used as a background, for example, with CORE nodes drawn on top.

"},{"location":"hitl.html","title":"Hardware In The Loop","text":""},{"location":"hitl.html#overview","title":"Overview","text":"

In some cases it may be impossible or impractical to run software using CORE nodes alone. You may need to bring external hardware into the network. CORE's emulated networks run in real time, so they can be connected to live physical networks. The RJ45 tool and the Tunnel tool help with connecting to the real world. These tools are available from the Link Layer Nodes menu.

When connecting two or more CORE emulations together, MAC address collisions should be avoided. CORE automatically assigns MAC addresses to interfaces when the emulation is started, starting with 00:00:00:aa:00:00 and incrementing the bottom byte. The starting byte should be changed on the second CORE machine using the Tools->MAC Addresses option in the menu.

"},{"location":"hitl.html#rj45-node","title":"RJ45 Node","text":"

CORE provides the RJ45 node, which represents a physical interface within the host that is running CORE. Real-world network devices can be connected to the interface and communicate with the CORE nodes in real time.

The main drawback is that one physical interface is required for each connection. When the physical interface is assigned to CORE, it may not be used for anything else. Another consideration is that the computer or network that you are connecting to must be co-located with the CORE machine.

"},{"location":"hitl.html#gui-usage","title":"GUI Usage","text":"

To place an RJ45 connection, click on the Link Layer Nodes toolbar and select the RJ45 Node from the options. Click on the canvas where you would like the node to be placed. Now click on the Link Tool and draw a link between the RJ45 and the other node you wish to connect to. The RJ45 node will display \"UNASSIGNED\". Double-click the RJ45 node to assign a physical interface. A list of available interfaces will be shown; select one, then click Apply.

Note

When you press the Start button to instantiate your topology, the interface assigned to the RJ45 will be connected to the CORE topology. The interface can no longer be used by the system.

"},{"location":"hitl.html#multiple-rj45s-with-one-interface-vlan","title":"Multiple RJ45s with One Interface (VLAN)","text":"

It is possible to have multiple RJ45 nodes using the same physical interface by leveraging 802.1Q VLANs. This allows for more RJ45 nodes than physical ports available, but the (e.g. switching) hardware connected to the physical port must support VLAN tagging, and the available bandwidth will be shared.

You need to create separate VLAN virtual devices on the Linux host, and then assign these devices to RJ45 nodes inside of CORE. The VLANing is actually performed outside of CORE, so when the CORE emulated node receives a packet, the VLAN tag will already be removed.

Here are example commands for creating VLAN devices under Linux:

ip link add link eth0 name eth0.1 type vlan id 1\nip link add link eth0 name eth0.2 type vlan id 2\nip link add link eth0 name eth0.3 type vlan id 3\n
"},{"location":"hitl.html#tunnel-tool","title":"Tunnel Tool","text":"

The tunnel tool builds GRE tunnels between CORE emulations or other hosts. Tunneling can be helpful when the number of physical interfaces is limited or when the peer is located on a different network. In this case a physical interface does not need to be dedicated to CORE as with the RJ45 tool.

The peer GRE tunnel endpoint may be another CORE machine or another host that supports GRE tunneling. When placing a Tunnel node, initially the node will display \"UNASSIGNED\". This text should be replaced with the IP address of the tunnel peer. This is the IP address of the other CORE machine or physical machine, not an IP address of another virtual node.

Note

Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices. The gretap device has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the bridge's MTU becomes 1,458 bytes. The Linux bridge will not perform fragmentation for large packets if other bridge ports have a higher MTU such as 1,500 bytes.

The GRE key is used to identify flows with GRE tunneling. This allows multiple GRE tunnels to exist between that same pair of tunnel peers. A unique number should be used when multiple tunnels are used with the same peer. When configuring the peer side of the tunnel, ensure that the matching keys are used.
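
For example (a sketch with placeholder addresses), two tunnels to the same peer would each use a distinct key, with the same key configured on both ends of each tunnel:

# first tunnel to the peer uses key 1\nsudo ip link add gt0 type gretap remote <peer-ip> local <local-ip> key 1\n# a second tunnel to the same peer is distinguished by key 2\nsudo ip link add gt1 type gretap remote <peer-ip> local <local-ip> key 2\n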

"},{"location":"hitl.html#example-usage","title":"Example Usage","text":"

Here are example commands for building the other end of a tunnel on a Linux machine. In this example, a router in CORE has the virtual address 10.0.0.1/24 and the CORE host machine has the (real) address 198.51.100.34/24. The Linux box that will connect with the CORE machine is reachable over the (real) network at 198.51.100.76/24. The emulated router is linked with the Tunnel Node. In the Tunnel Node configuration dialog, the address 198.51.100.76 is entered, with the key set to 1. The gretap interface on the Linux box will be assigned an address from the subnet of the virtual router node, 10.0.0.2/24.

# these commands are run on the tunnel peer\nsudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1\nsudo ip addr add 10.0.0.2/24 dev gt0\nsudo ip link set dev gt0 up\n

Now the virtual router should be able to ping the Linux machine:

# from the CORE router node\nping 10.0.0.2\n

And the Linux machine should be able to ping inside the CORE emulation:

# from the tunnel peer\nping 10.0.0.1\n

To debug this configuration, tcpdump can be run on the gretap devices, or on the physical interfaces on the CORE or Linux machines. Make sure that a firewall is not blocking the GRE traffic.
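
For example, assuming the gretap device from above and an illustrative physical interface name:

# on the tunnel peer, watch decapsulated traffic arriving on the gretap device\nsudo tcpdump -ni gt0\n# on the physical interface, watch the GRE-encapsulated packets\nsudo tcpdump -ni eth0 ip proto gre\n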

"},{"location":"install.html","title":"Installation","text":"

Warning

If Docker is installed, the default iptables rules will block CORE traffic

"},{"location":"install.html#overview","title":"Overview","text":"

CORE currently supports and provides the following installation options, with the package option being preferred.

  • Package based install (rpm/deb)
  • Script based install
  • Dockerfile based install
"},{"location":"install.html#requirements","title":"Requirements","text":"

Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous containers, as a general rule you should select a machine having as much RAM and CPU resources as possible.

  • Linux Kernel v3.3+
  • iproute2 4.5+ is a requirement for bridge related commands
  • nftables compatible kernel and nft command line tool
"},{"location":"install.html#supported-linux-distributions","title":"Supported Linux Distributions","text":"

The plan is to support recent Ubuntu and CentOS LTS releases.

Verified:

  • Ubuntu - 18.04, 20.04, 22.04
  • CentOS - 7.8
"},{"location":"install.html#files","title":"Files","text":"

The following is a list of files that will be present after installation.

  • executables
    • <prefix>/bin/{vcmd, vnoded}
    • the prefix can be adjusted using the script based install; the package install uses /usr
  • python files
    • virtual environment /opt/core/venv
    • local install will be local to the python version used
      • python3 -c \"import core; print(core.__file__)\"
    • scripts {core-daemon, core-cleanup, etc}
      • virtualenv /opt/core/venv/bin
      • local /usr/local/bin
  • configuration files
    • /etc/core/{core.conf, logging.conf}
  • ospf mdr repository files when using script based install
    • <repo>/../ospf-mdr
"},{"location":"install.html#installed-scripts","title":"Installed Scripts","text":"

The following python scripts are provided.

Name Description core-cleanup tool to help remove lingering CORE-created containers, bridges, and directories core-cli tool to query, open xml files, and send commands using gRPC core-daemon runs the backend core server providing a gRPC API core-gui starts the GUI core-python provides a convenience for running the core python virtual environment core-route-monitor tool to help monitor traffic across nodes and feed that to SDT core-service-update tool to help automate modifying a legacy service to match current naming"},{"location":"install.html#upgrading-from-older-release","title":"Upgrading from Older Release","text":"

Please make sure to uninstall any previous installations of CORE cleanly before proceeding to install.

To clear out a current install from 7.0.0+, make sure to provide the options used during install (-l or -p).

cd <CORE_REPO>\ninv uninstall <options>\n

Previous install was built from source for CORE release older than 7.0.0:

cd <CORE_REPO>\nsudo make uninstall\nmake clean\n./bootstrap.sh clean\n

Installed from previously built packages:

# centos\nsudo yum remove core\n# ubuntu\nsudo apt remove core\n
"},{"location":"install.html#installation-examples","title":"Installation Examples","text":"

The below links will take you to sections providing complete examples for installing CORE and related utilities on fresh installations. Otherwise, a breakdown of installing the different components and the options available is detailed below.

  • Ubuntu 22.04
  • CentOS 7
"},{"location":"install.html#package-based-install","title":"Package Based Install","text":"

Starting with 9.0.0 there are pre-built rpm/deb packages. You can retrieve the rpm/deb package from the releases page.

The built packages will require and install system level dependencies, as well as running a post install script to install the provided CORE python wheel. A similar uninstall script is run when uninstalling and requires the same options as were given during the install.

Note

PYTHON defaults to python3 for the installs below. CORE requires python3.9+, pip, tk compatibility for the python GUI, and venv for virtual environments.

Examples for install:

# recommended to upgrade to the latest version of pip before installation\n# in python, can help avoid building from source issues\nsudo <python> -m pip install --upgrade pip\n# install vcmd/vnoded, system dependencies,\n# and core python into a venv located at /opt/core/venv\nsudo <yum/apt> install -y ./<package>\n# disable the venv and install to python directly\nsudo NO_VENV=1 <yum/apt> install -y ./<package>\n# change python executable used to install for venv or direct installations\nsudo PYTHON=python3.9 <yum/apt> install -y ./<package>\n# disable venv and change python executable\nsudo NO_VENV=1 PYTHON=python3.9 <yum/apt> install -y ./<package>\n# skip installing the python portion entirely, as you plan to carry this out yourself\n# core python wheel is located at /opt/core/core-<version>-py3-none-any.whl\nsudo NO_PYTHON=1 <yum/apt> install -y ./<package>\n# install python wheel into python of your choosing\nsudo <python> -m pip install /opt/core/core-<version>-py3-none-any.whl\n

Example for removal, requires using the same options as install:

# remove a standard install\nsudo <yum/apt> remove core\n# remove a local install\nsudo NO_VENV=1 <yum/apt> remove core\n# remove install using alternative python\nsudo PYTHON=python3.9 <yum/apt> remove core\n# remove install using alternative python and local install\nsudo NO_VENV=1 PYTHON=python3.9 <yum/apt> remove core\n# remove install and skip python uninstall\nsudo NO_PYTHON=1 <yum/apt> remove core\n
"},{"location":"install.html#installing-ospf-mdr","title":"Installing OSPF MDR","text":"

You will need to manually install OSPF MDR for routing nodes, since this is not provided by the package.

git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git\ncd ospf-mdr\n./bootstrap.sh\n./configure --disable-doc --enable-user=root --enable-group=root \\\n--with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \\\n--localstatedir=/var/run/quagga\nmake -j$(nproc)\nsudo make install\n

When done see Post Install.

"},{"location":"install.html#script-based-install","title":"Script Based Install","text":"

The script based installation will install system level dependencies, the CORE python library and its dependencies, as well as dependencies for building CORE.

The script based install also automatically builds and installs OSPF MDR, used by default on routing nodes. This can optionally be skipped.

Installation will carry out the following steps:

  • installs system dependencies for building core
  • builds vcmd/vnoded and python grpc files
  • installs core into poetry managed virtual environment or locally, if flag is passed
  • installs systemd service pointing to appropriate python location based on install type
  • clone/build/install a working version of OSPF MDR

Note

Installing locally comes with its own risks; it can result in potential dependency conflicts with python dependencies installed by the system package manager.

Note

Provide a prefix that will be found on PATH when running as sudo, if the default prefix /usr/local is not valid.

The following tools will be leveraged during installation:

Tool Description pip used to install pipx pipx used to install standalone python tools (invoke, poetry) invoke used to run provided tasks (install, uninstall, reinstall, etc) poetry used to install the python virtual environment or build a python wheel

First we will need to clone and navigate to the CORE repo.

# clone CORE repo\ngit clone https://github.com/coreemu/core.git\ncd core\n\n# install dependencies to run installation task\n./setup.sh\n# skip installing system packages, due to using python built from source\nNO_SYSTEM=1 ./setup.sh\n\n# run the following or open a new terminal\nsource ~/.bashrc\n\n# Ubuntu\ninv install\n# CentOS\ninv install -p /usr\n# optionally skip python system packages\ninv install --no-python\n# optionally skip installing ospf mdr\ninv install --no-ospf\n\n# install command options\nUsage: inv[oke] [--core-opts] install [--options] [other tasks here ...]\n\nDocstring:\n  install core, poetry, scripts, service, and ospf mdr\n\nOptions:\n  -d, --dev                          install development mode\n  -i STRING, --install-type=STRING   used to force an install type, can be one of the following (redhat, debian)\n-l, --local                        determines if core will install to local system, default is False\n  -n, --no-python                    avoid installing python system dependencies\n  -o, --[no-]ospf                    disable ospf installation\n  -p STRING, --prefix=STRING         prefix where scripts are installed, default is /usr/local\n  -v, --verbose\n

When done see Post Install.

"},{"location":"install.html#unsupported-linux-distribution","title":"Unsupported Linux Distribution","text":"

For unsupported OSs you could attempt to do the following to translate an installation to your use case.

  • make sure you have python3.9+ with venv support
  • make sure you have python3 invoke available to leverage <repo>/tasks.py
# this will print the commands that would be ran for a given installation\n# type without actually running them, they may help in being used as\n# the basis for translating to your OS\ninv install --dry -v -p <prefix> -i <install type>\n
"},{"location":"install.html#dockerfile-based-install","title":"Dockerfile Based Install","text":"

You can leverage one of the provided Dockerfiles to run and launch CORE within a Docker container.

Since CORE nodes will leverage software available within the system for a given use case, make sure to update and build the Dockerfile with desired software.

# clone core\ngit clone https://github.com/coreemu/core.git\ncd core\n# build image\nsudo docker build -t core -f dockerfiles/Dockerfile.<centos,ubuntu> .\n# start container\nsudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged core\n# enable xhost access to the root user\nxhost +local:root\n# launch core-gui\nsudo docker exec -it core core-gui\n

When done see Post Install.

"},{"location":"install.html#installing-emane","title":"Installing EMANE","text":"

Note

Installing EMANE for the virtual environment is known to work for 1.21+

The recommended way to install EMANE is using prebuilt packages, otherwise you can follow their instructions for installing from source. Installation information can be found here.

There is an invoke task to help install the EMANE bindings into the CORE virtual environment, when needed. An example for running the task is below and the version provided should match the version of the packages installed.

You will also need to make sure you are providing the correct python binary for where CORE is being used.

Also, these EMANE bindings need to be built using protoc 3.19+. So make sure that is available and being picked up on PATH properly.

Examples for building and installing EMANE python bindings for use in CORE:

# if your system does not have protoc 3.19+\nwget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip\nmkdir protoc\nunzip protoc-3.19.6-linux-x86_64.zip -d protoc\ngit clone https://github.com/adjacentlink/emane.git\ncd emane\ngit checkout v1.3.3\n./autogen.sh\nPYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr\ncd src/python\nPATH=/opt/protoc/bin:$PATH make\n/opt/core/venv/bin/python -m pip install .\n\n# when your system has protoc 3.19+\ncd <CORE_REPO>\n# example version tag v1.3.3\n# overriding python used to leverage the default virtualenv install\nPYTHON=/opt/core/venv/bin/python inv install-emane -e <version tag>\n# local install that uses whatever python3 refers to\ninv install-emane -e <version tag>\n
"},{"location":"install.html#post-install","title":"Post Install","text":"

After installation completes you are now ready to run CORE.

"},{"location":"install.html#resolving-docker-issues","title":"Resolving Docker Issues","text":"

If you have Docker installed, by default it will change the iptables forwarding chain to drop packets, which will cause issues for CORE traffic.

You can temporarily resolve the issue with the following command:

sudo iptables --policy FORWARD ACCEPT\n

Alternatively, you can configure Docker to avoid doing this, but doing so will likely break normal Docker networking usage. Using the setting below will require a restart.

Place the file contents below in /etc/docker/daemon.json

{\n\"iptables\": false\n}\n
"},{"location":"install.html#resolving-path-issues","title":"Resolving Path Issues","text":"

One problem you may run into when running CORE, whether using the virtual environment or a local install, is issues related to your PATH.

To add support for your user to run scripts from the virtual environment:

# can add to ~/.bashrc\nexport PATH=$PATH:/opt/core/venv/bin\n

This will not solve the path issue when running as sudo, so you can do either of the following to compensate.

# run command passing in the right PATH to pickup from the user running the command\nsudo env PATH=$PATH core-daemon\n\n# add an alias to ~/.bashrc or something similar\nalias sudop='sudo env PATH=$PATH'\n# now you can run commands like so\nsudop core-daemon\n
"},{"location":"install.html#running-core","title":"Running CORE","text":"

The following assumes you have resolved PATH issues and set up the sudop alias.

# in one terminal run the server daemon using the alias above\nsudop core-daemon\n# in another terminal run the gui client\ncore-gui\n
"},{"location":"install.html#enabling-service","title":"Enabling Service","text":"

After installation, the core service is not enabled by default. If you desire to use the service, run the following commands.

sudo systemctl enable core-daemon\nsudo systemctl start core-daemon\n
"},{"location":"install_centos.html","title":"Install CentOS","text":""},{"location":"install_centos.html#overview","title":"Overview","text":"

Below is a detailed path for installing CORE and related tooling on a fresh CentOS 7 install. Both of the examples below will install CORE into its own virtual environment located at /opt/core/venv. Both examples below also assume using ~/Documents as the working directory.

"},{"location":"install_centos.html#script-install","title":"Script Install","text":"

This section covers step by step commands that can be used to install CORE using the script based installation path.

# install system packages\nsudo yum -y update\nsudo yum install -y git sudo wget tzdata unzip libpcap-devel libpcre3-devel \\\nlibxml2-devel protobuf-devel unzip uuid-devel tcpdump make epel-release\nsudo yum-builddep -y python3\n\n# install python3.9\ncd ~/Documents\nwget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz\ntar xf Python-3.9.15.tgz\ncd Python-3.9.15\n./configure --enable-optimizations --with-ensurepip=install\nsudo make -j$(nproc) altinstall\npython3.9 -m pip install --upgrade pip\n\n# install core\ncd ~/Documents\ngit clone https://github.com/coreemu/core\ncd core\nNO_SYSTEM=1 PYTHON=/usr/local/bin/python3.9 ./setup.sh\nsource ~/.bashrc\nPYTHON=python3.9 inv install -p /usr --no-python\n\n# install emane\ncd ~/Documents\nwget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz\ntar xf emane-1.3.3-release-1.el7.x86_64.tar.gz\ncd emane-1.3.3-release-1/rpms/el7/x86_64\nsudo yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm\n\n# install emane python bindings into CORE virtual environment\ncd ~/Documents\nwget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip\nmkdir protoc\nunzip protoc-3.19.6-linux-x86_64.zip -d protoc\ngit clone https://github.com/adjacentlink/emane.git\ncd emane\ngit checkout v1.3.3\n./autogen.sh\nPYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr\ncd src/python\nPATH=~/Documents/protoc/bin:$PATH make\nsudo /opt/core/venv/bin/python -m pip install .\n
"},{"location":"install_centos.html#package-install","title":"Package Install","text":"

This section covers step by step commands that can be used to install CORE using the package based installation path. This will require downloading a package from the release page, to use during the install CORE step below.

# install system packages\nsudo yum -y update\nsudo yum install -y git sudo wget tzdata unzip libpcap-devel libpcre3-devel libxml2-devel \\\nprotobuf-devel unzip uuid-devel tcpdump automake gawk libreadline-devel libtool \\\npkg-config make\nsudo yum-builddep -y python3\n\n# install python3.9\ncd ~/Documents\nwget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz\ntar xf Python-3.9.15.tgz\ncd Python-3.9.15\n./configure --enable-optimizations --with-ensurepip=install\nsudo make -j$(nproc) altinstall\npython3.9 -m pip install --upgrade pip\n\n# install core\ncd ~/Documents\nsudo PYTHON=python3.9 yum install -y ./core_*.rpm\n\n# install ospf mdr\ncd ~/Documents\ngit clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git\ncd ospf-mdr\n./bootstrap.sh\n./configure --disable-doc --enable-user=root --enable-group=root \\\n--with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \\\n--localstatedir=/var/run/quagga\nmake -j$(nproc)\nsudo make install\n\n# install emane\ncd ~/Documents\nwget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz\ntar xf emane-1.3.3-release-1.el7.x86_64.tar.gz\ncd emane-1.3.3-release-1/rpms/el7/x86_64\nsudo yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm\n\n# install emane python bindings into CORE virtual environment\ncd ~/Documents\nwget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip\nmkdir protoc\nunzip protoc-3.19.6-linux-x86_64.zip -d protoc\ngit clone https://github.com/adjacentlink/emane.git\ncd emane\ngit checkout v1.3.3\n./autogen.sh\nPYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr\ncd src/python\nPATH=~/Documents/protoc/bin:$PATH make\nsudo /opt/core/venv/bin/python -m pip install .\n
"},{"location":"install_centos.html#setup-path","title":"Setup PATH","text":"

The CORE virtual environment and related scripts will not be found on your PATH, so some adjustments need to be made.

To add support for your user to run scripts from the virtual environment:

# can add to ~/.bashrc\nexport PATH=$PATH:/opt/core/venv/bin\n

This will not solve the path issue when running as sudo, so you can do either of the following to compensate.

# run command passing in the right PATH to pickup from the user running the command\nsudo env PATH=$PATH core-daemon\n\n# add an alias to ~/.bashrc or something similar\nalias sudop='sudo env PATH=$PATH'\n# now you can run commands like so\nsudop core-daemon\n
"},{"location":"install_ubuntu.html","title":"Install Ubuntu","text":""},{"location":"install_ubuntu.html#overview","title":"Overview","text":"

Below is a detailed path for installing CORE and related tooling on a fresh Ubuntu 22.04 installation. Both of the examples below will install CORE into its own virtual environment located at /opt/core/venv. Both examples below also assume using ~/Documents as the working directory.

"},{"location":"install_ubuntu.html#script-install","title":"Script Install","text":"

This section covers step by step commands that can be used to install CORE using the script based installation path.

# install system packages\nsudo apt-get update -y\nsudo apt-get install -y ca-certificates git sudo wget tzdata libpcap-dev libpcre3-dev \\\nlibprotobuf-dev libxml2-dev protobuf-compiler unzip uuid-dev iproute2 iputils-ping \\\ntcpdump\n\n# install core\ncd ~/Documents\ngit clone https://github.com/coreemu/core\ncd core\n./setup.sh\nsource ~/.bashrc\ninv install\n\n# install emane\ncd ~/Documents\nwget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip\nmkdir protoc\nunzip protoc-3.19.6-linux-x86_64.zip -d protoc\ngit clone https://github.com/adjacentlink/emane.git\ncd emane\n./autogen.sh\n./configure --prefix=/usr\nmake -j$(nproc)\nsudo make install\ncd src/python\nmake clean\nPATH=~/Documents/protoc/bin:$PATH make\nsudo /opt/core/venv/bin/python -m pip install .\n
"},{"location":"install_ubuntu.html#package-install","title":"Package Install","text":"

This section covers step by step commands that can be used to install CORE using the package based installation path. This will require downloading a package from the release page, to use during the install CORE step below.

# install system packages\nsudo apt-get update -y\nsudo apt-get install -y ca-certificates python3 python3-tk python3-pip python3-venv \\\nlibpcap-dev libpcre3-dev libprotobuf-dev libxml2-dev protobuf-compiler unzip \\\nuuid-dev automake gawk git wget libreadline-dev libtool pkg-config g++ make \\\niputils-ping tcpdump\n\n# install core\ncd ~/Documents\nsudo apt-get install -y ./core_*.deb\n\n# install ospf mdr\ncd ~/Documents\ngit clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git\ncd ospf-mdr\n./bootstrap.sh\n./configure --disable-doc --enable-user=root --enable-group=root \\\n--with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \\\n--localstatedir=/var/run/quagga\nmake -j$(nproc)\nsudo make install\n\n# install emane\ncd ~/Documents\nwget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip\nmkdir protoc\nunzip protoc-3.19.6-linux-x86_64.zip -d protoc\ngit clone https://github.com/adjacentlink/emane.git\ncd emane\n./autogen.sh\n./configure --prefix=/usr\nmake -j$(nproc)\nsudo make install\ncd src/python\nmake clean\nPATH=~/Documents/protoc/bin:$PATH make\nsudo /opt/core/venv/bin/python -m pip install .\n
"},{"location":"install_ubuntu.html#setup-path","title":"Setup PATH","text":"

The CORE virtual environment and related scripts will not be found on your PATH, so some adjustments need to be made.

To add support for your user to run scripts from the virtual environment:

# can add to ~/.bashrc\nexport PATH=$PATH:/opt/core/venv/bin\n

This will not solve the path issue when running as sudo, so you can do either of the following to compensate.

# run command passing in the right PATH to pickup from the user running the command\nsudo env PATH=$PATH core-daemon\n\n# add an alias to ~/.bashrc or something similar\nalias sudop='sudo env PATH=$PATH'\n# now you can run commands like so\nsudop core-daemon\n
"},{"location":"lxc.html","title":"LXC Support","text":""},{"location":"lxc.html#overview","title":"Overview","text":"

LXC nodes are provided by way of LXD to create nodes using predefined images and provide file system separation.

"},{"location":"lxc.html#installation","title":"Installation","text":""},{"location":"lxc.html#debian-systems","title":"Debian Systems","text":"
sudo snap install lxd\n
"},{"location":"lxc.html#configuration","title":"Configuration","text":"

Initialize LXD and say no to adding a default bridge.

sudo lxd init\n
"},{"location":"lxc.html#group-setup","title":"Group Setup","text":"

To use LXC nodes within the python GUI, you will need to make sure the user running the GUI is a member of the lxd group.

# add group if does not exist\nsudo groupadd lxd\n\n# add user to group\nsudo usermod -aG lxd $USER\n\n# to get this change to take effect, log out and back in or run the following\nnewgrp lxd\n
"},{"location":"lxc.html#tools-and-versions-tested-with","title":"Tools and Versions Tested With","text":"
  • LXD 3.14
  • nsenter from util-linux 2.31.1
"},{"location":"nodetypes.html","title":"Node Types","text":""},{"location":"nodetypes.html#overview","title":"Overview","text":"

Different node types can be used within CORE, each with their own tradeoffs and functionality.

"},{"location":"nodetypes.html#core-nodes","title":"CORE Nodes","text":"

CORE nodes are the standard node type typically used in CORE. They are backed by Linux network namespaces. They use very little in the way of system resources in order to emulate a network. They do, however, share the host's file system, as they do not get their own. CORE nodes will have a directory uniquely created for them as a place to keep their files and mounted directories (/tmp/pycore.<session id>/<node name>.conf), which will usually be wiped and removed upon shutdown.
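
While a session is running, this directory can be inspected from the host; a quick sketch using the placeholders above:

# list a node's private files and mounted directories for a running session\nls /tmp/pycore.<session id>/<node name>.conf\n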

"},{"location":"nodetypes.html#docker-nodes","title":"Docker Nodes","text":"

Docker nodes provide a convenience for running nodes using predefined images and filesystems that CORE nodes do not provide. Details for using Docker nodes can be found here.

"},{"location":"nodetypes.html#lxc-nodes","title":"LXC Nodes","text":"

LXC nodes provide a convenience for running nodes using predefined images and filesystems that CORE nodes do not provide. Details for using LXC nodes can be found here.

"},{"location":"nodetypes.html#physical-nodes","title":"Physical Nodes","text":"

The physical machine type is used for nodes that represent a real Linux-based machine that will participate in the emulated network scenario. This is typically used, for example, to incorporate racks of server machines from an emulation testbed. A physical node is one that is running the CORE daemon (core-daemon), but will not be further partitioned into containers. Services that are run on the physical node do not run in an isolated environment, but directly on the operating system.

Physical nodes must be assigned to servers, the same way nodes are assigned to emulation servers with Distributed Emulation. The list of available physical nodes currently shares the same dialog box and list as the emulation servers, accessed using the Emulation Servers... entry from the Session menu.

Support for physical nodes is under development and may be improved in future releases. Currently, when any node is linked to a physical node, a dashed line is drawn to indicate network tunneling. A GRE tunneling interface will be created on the physical node and used to tunnel traffic to and from the emulated world.

Double-clicking on a physical node during runtime opens a terminal with an SSH shell to that node. Users should configure public-key SSH login as done with emulation servers.

"},{"location":"performance.html","title":"CORE Performance","text":""},{"location":"performance.html#overview","title":"Overview","text":"

The top question about the performance of CORE is often how many nodes can it handle? The answer depends on several factors:

Factor Performance Impact Hardware the number and speed of processors in the computer, the available processor cache, RAM, and front-side bus speed may greatly affect overall performance. Operating system the version/distribution of Linux and the specific kernel version used will affect overall performance. Active processes all nodes share the same CPU resources, so if one or more nodes is performing a CPU-intensive task, overall performance will suffer. Network traffic the more packets that are sent around the virtual network, the greater the CPU usage. GUI usage widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation.

On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux, we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3 routing. On this hardware CORE can instantiate 100 or more nodes, but at that point it becomes critical as to what each of the nodes is doing.

Because this software is primarily a network emulator, the more appropriate question is how much network traffic can it handle? On the same 3.0GHz server described above, running Linux, about 300,000 packets-per-second can be pushed through the system. The number of hops and the size of the packets is less important. The limiting factor is the number of times that the operating system needs to handle a packet. The 300,000 pps figure represents the number of times the system as a whole needed to deal with a packet. As more network hops are added, this increases the number of context switches and decreases the throughput seen on the full length of the network path.
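
One simple way to gauge this on your own hardware is to run a traffic generator such as iperf3 between two linked nodes and watch host CPU usage (a sketch; this assumes iperf3 is installed on the host, and the address below is illustrative):

# inside one node, run the server\niperf3 -s\n# inside the other node, run the client against the first node's address\niperf3 -c 10.0.0.1 -t 30\n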

Note

The right question to be asking is \"how much traffic?\", not \"how many nodes?\".

For a more detailed study of performance in CORE, refer to the following publications:

  • J. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
  • Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
  • J. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time network emulator, Proceedings of IEEE MILCOM Conference, 2008.
"},{"location":"python.html","title":"Python API","text":""},{"location":"python.html#overview","title":"Overview","text":"

Writing your own Python scripts offers a rich programming environment with complete control over all aspects of the emulation.

The scripts need to be run with root privileges because they create new network namespaces. In general, a CORE Python script does not connect to the CORE daemon; in fact, the core-daemon is just another Python script that uses the CORE Python modules and exchanges messages with the GUI.

"},{"location":"python.html#examples","title":"Examples","text":""},{"location":"python.html#node-models","title":"Node Models","text":"

When creating nodes of type core.nodes.base.CoreNode these are the default models and the services they map to.

  • mdr
    • zebra, OSPFv3MDR, IPForward
  • PC
    • DefaultRoute
  • router
    • zebra, OSPFv2, OSPFv3, IPForward
  • host
    • DefaultRoute, SSH
"},{"location":"python.html#interface-helper","title":"Interface Helper","text":"

There is an interface helper class that can be leveraged for convenience when creating interface data for nodes. Alternatively one can manually create a core.emulator.data.InterfaceData class instead with appropriate information.

Manually creating interface data:

from core.emulator.data import InterfaceData\n\n# id is optional and will set to the next available id\n# name is optional and will default to eth<id>\n# mac is optional and will result in a randomly generated mac\niface_data = InterfaceData(\n    id=0,\n    name=\"eth0\",\n    ip4=\"10.0.0.1\",\n    ip4_mask=24,\n    ip6=\"2001::\",\n    ip6_mask=64,\n)\n

Leveraging the interface prefixes helper class:

from core.emulator.data import IpPrefixes\n\nip_prefixes = IpPrefixes(ip4_prefix=\"10.0.0.0/24\", ip6_prefix=\"2001::/64\")\n# node is used to get an ip4/ip6 address indexed from within the above prefixes\n# name is optional and would default to eth<id>\n# mac is optional and will result in a randomly generated mac\niface_data = ip_prefixes.create_iface(\n    node=node, name=\"eth0\", mac=\"00:00:00:00:aa:00\"\n)\n
"},{"location":"python.html#listening-to-events","title":"Listening to Events","text":"

Various events that can occur within a session can be listened to.

Event types:

  • session - events for changes in session state and mobility start/stop/pause
  • node - events for node movements and icon changes
  • link - events for link configuration changes and wireless link add/delete
  • config - configuration events when legacy gui joins a session
  • exception - alert/error events
  • file - file events when the legacy gui joins a session
def event_listener(event):\n    print(event)\n\n\n# add an event listener to event type you want to listen to\n# each handler will receive an object unique to that type\nsession.event_handlers.append(event_listener)\nsession.exception_handlers.append(event_listener)\nsession.node_handlers.append(event_listener)\nsession.link_handlers.append(event_listener)\nsession.file_handlers.append(event_listener)\nsession.config_handlers.append(event_listener)\n
"},{"location":"python.html#configuring-links","title":"Configuring Links","text":"

Links can be configured at the time of creation or during runtime.

Currently supported configuration options:

  • bandwidth (bps)
  • delay (us)
  • dup (%)
  • jitter (us)
  • loss (%)
from core.emulator.data import LinkOptions\n\n# configuring when creating a link\noptions = LinkOptions(\n    bandwidth=54_000_000,\n    delay=5000,\n    dup=5,\n    loss=5.5,\n    jitter=0,\n)\nsession.add_link(n1_id, n2_id, iface1_data, iface2_data, options)\n\n# configuring during runtime\nsession.update_link(n1_id, n2_id, iface1_id, iface2_id, options)\n
"},{"location":"python.html#peer-to-peer-example","title":"Peer to Peer Example","text":"
# required imports\nfrom core.emulator.coreemu import CoreEmu\nfrom core.emulator.data import IpPrefixes\nfrom core.emulator.enumerations import EventTypes\nfrom core.nodes.base import CoreNode, Position\n\n# ip nerator for example\nip_prefixes = IpPrefixes(ip4_prefix=\"10.0.0.0/24\")\n\n# create emulator instance for creating sessions and utility methods\ncoreemu = CoreEmu()\nsession = coreemu.create_session()\n\n# must be in configuration state for nodes to start, when using \"node_add\" below\nsession.set_state(EventTypes.CONFIGURATION_STATE)\n\n# create nodes\nposition = Position(x=100, y=100)\nn1 = session.add_node(CoreNode, position=position)\nposition = Position(x=300, y=100)\nn2 = session.add_node(CoreNode, position=position)\n\n# link nodes together\niface1 = ip_prefixes.create_iface(n1)\niface2 = ip_prefixes.create_iface(n2)\nsession.add_link(n1.id, n2.id, iface1, iface2)\n\n# start session\nsession.instantiate()\n\n# do whatever you like here\ninput(\"press enter to shutdown\")\n\n# stop session\nsession.shutdown()\n
"},{"location":"python.html#switchhub-example","title":"Switch/Hub Example","text":"
# required imports\nfrom core.emulator.coreemu import CoreEmu\nfrom core.emulator.data import IpPrefixes\nfrom core.emulator.enumerations import EventTypes\nfrom core.nodes.base import CoreNode, Position\nfrom core.nodes.network import SwitchNode\n\n# ip nerator for example\nip_prefixes = IpPrefixes(ip4_prefix=\"10.0.0.0/24\")\n\n# create emulator instance for creating sessions and utility methods\ncoreemu = CoreEmu()\nsession = coreemu.create_session()\n\n# must be in configuration state for nodes to start, when using \"node_add\" below\nsession.set_state(EventTypes.CONFIGURATION_STATE)\n\n# create switch\nposition = Position(x=200, y=200)\nswitch = session.add_node(SwitchNode, position=position)\n\n# create nodes\nposition = Position(x=100, y=100)\nn1 = session.add_node(CoreNode, position=position)\nposition = Position(x=300, y=100)\nn2 = session.add_node(CoreNode, position=position)\n\n# link nodes to switch\niface1 = ip_prefixes.create_iface(n1)\nsession.add_link(n1.id, switch.id, iface1)\niface1 = ip_prefixes.create_iface(n2)\nsession.add_link(n2.id, switch.id, iface1)\n\n# start session\nsession.instantiate()\n\n# do whatever you like here\ninput(\"press enter to shutdown\")\n\n# stop session\nsession.shutdown()\n
"},{"location":"python.html#wlan-example","title":"WLAN Example","text":"
# required imports\nfrom core.emulator.coreemu import CoreEmu\nfrom core.emulator.data import IpPrefixes\nfrom core.emulator.enumerations import EventTypes\nfrom core.location.mobility import BasicRangeModel\nfrom core.nodes.base import CoreNode, Position\nfrom core.nodes.network import WlanNode\n\n# ip nerator for example\nip_prefixes = IpPrefixes(ip4_prefix=\"10.0.0.0/24\")\n\n# create emulator instance for creating sessions and utility methods\ncoreemu = CoreEmu()\nsession = coreemu.create_session()\n\n# must be in configuration state for nodes to start, when using \"node_add\" below\nsession.set_state(EventTypes.CONFIGURATION_STATE)\n\n# create wlan\nposition = Position(x=200, y=200)\nwlan = session.add_node(WlanNode, position=position)\n\n# create nodes\noptions = CoreNode.create_options()\noptions.model = \"mdr\"\nposition = Position(x=100, y=100)\nn1 = session.add_node(CoreNode, position=position, options=options)\nposition = Position(x=300, y=100)\nn2 = session.add_node(CoreNode, position=position, options=options)\n\n# configuring wlan\nsession.mobility.set_model_config(wlan.id, BasicRangeModel.name, {\n    \"range\": \"280\",\n    \"bandwidth\": \"55000000\",\n    \"delay\": \"6000\",\n    \"jitter\": \"5\",\n    \"error\": \"5\",\n})\n\n# link nodes to wlan\niface1 = ip_prefixes.create_iface(n1)\nsession.add_link(n1.id, wlan.id, iface1)\niface1 = ip_prefixes.create_iface(n2)\nsession.add_link(n2.id, wlan.id, iface1)\n\n# start session\nsession.instantiate()\n\n# do whatever you like here\ninput(\"press enter to shutdown\")\n\n# stop session\nsession.shutdown()\n
"},{"location":"python.html#emane-example","title":"EMANE Example","text":"

For EMANE you can import and use one of the existing models and use its name for configuration.

Current models:

  • core.emane.models.ieee80211abg.EmaneIeee80211abgModel
  • core.emane.models.rfpipe.EmaneRfPipeModel
  • core.emane.models.tdma.EmaneTdmaModel
  • core.emane.models.bypass.EmaneBypassModel

Their configuration options are driven dynamically from parsed EMANE manifest files from the installed version of EMANE.

Options and their purpose can be found at the EMANE Wiki.

If configuring EMANE global settings or model mac/phy specific settings, any value not provided will use the defaults. When no configuration is used, the defaults are used.

# required imports\nfrom core.emane.models.ieee80211abg import EmaneIeee80211abgModel\nfrom core.emane.nodes import EmaneNet\nfrom core.emulator.coreemu import CoreEmu\nfrom core.emulator.data import IpPrefixes\nfrom core.emulator.enumerations import EventTypes\nfrom core.nodes.base import CoreNode, Position\n\n# ip nerator for example\nip_prefixes = IpPrefixes(ip4_prefix=\"10.0.0.0/24\")\n\n# create emulator instance for creating sessions and utility methods\ncoreemu = CoreEmu()\nsession = coreemu.create_session()\n\n# location information is required to be set for emane\nsession.location.setrefgeo(47.57917, -122.13232, 2.0)\nsession.location.refscale = 150.0\n\n# must be in configuration state for nodes to start, when using \"node_add\" below\nsession.set_state(EventTypes.CONFIGURATION_STATE)\n\n# create emane\noptions = EmaneNet.create_options()\noptions.emane_model = EmaneIeee80211abgModel.name\nposition = Position(x=200, y=200)\nemane = session.add_node(EmaneNet, position=position, options=options)\n\n# create nodes\noptions = CoreNode.create_options()\noptions.model = \"mdr\"\nposition = Position(x=100, y=100)\nn1 = session.add_node(CoreNode, position=position, options=options)\nposition = Position(x=300, y=100)\nn2 = session.add_node(CoreNode, position=position, options=options)\n\n# configure general emane settings\nconfig = session.emane.get_configs()\nconfig.update({\n    \"eventservicettl\": \"2\"\n})\n\n# configure emane model settings\n# using a dict mapping currently support values as strings\nsession.emane.set_model_config(emane.id, EmaneIeee80211abgModel.name, {\n    \"unicastrate\": \"3\",\n})\n\n# link nodes to emane\niface1 = ip_prefixes.create_iface(n1)\nsession.add_link(n1.id, emane.id, iface1)\niface1 = ip_prefixes.create_iface(n2)\nsession.add_link(n2.id, emane.id, iface1)\n\n# start session\nsession.instantiate()\n\n# do whatever you like here\ninput(\"press enter to shutdown\")\n\n# stop session\nsession.shutdown()\n

EMANE Model Configuration:

from core import utils\n\n# standardized way to retrieve an appropriate config id\n# iface id can be omitted, to allow a general configuration for a model, per node\nconfig_id = utils.iface_config_id(node.id, iface_id)\n# set emane configuration for the config id\nsession.emane.set_config(config_id, EmaneIeee80211abgModel.name, {\n    \"unicastrate\": \"3\",\n})\n
"},{"location":"python.html#configuring-a-service","title":"Configuring a Service","text":"

Services help generate and run bash scripts on nodes for a given purpose.

Configuring the files of a service results in a specific hard-coded script being generated, instead of the default scripts, which may leverage dynamic generation.

The following features can be configured for a service:

  • configs - files that will be generated
  • dirs - directories that will be mounted unique to the node
  • startup - commands to run to start a service
  • validate - commands to run to validate a service
  • shutdown - commands to run to stop a service

Editing service properties:

# configure a service, for a node, for a given session\nsession.services.set_service(node_id, service_name)\nservice = session.services.get_service(node_id, service_name)\nservice.configs = (\"file1.sh\", \"file2.sh\")\nservice.dirs = (\"/etc/node\",)\nservice.startup = (\"bash file1.sh\",)\nservice.validate = ()\nservice.shutdown = ()\n

When editing a service file, it must be the name of a config file that the service will generate.

Editing a service file:

# to edit the contents of a generated file you can specify\n# the service, the file name, and its contents\nsession.services.set_service_file(\n    node_id,\n    service_name,\n    file_name,\n    \"echo hello\",\n)\n
"},{"location":"python.html#file-examples","title":"File Examples","text":"

File versions of the network examples can be found here.

"},{"location":"python.html#executing-scripts-from-gui","title":"Executing Scripts from GUI","text":"

To execute a python script from the GUI, you need to have the following.

The builtin name check here is used to know the script is being executed from the GUI; this can be avoided if your script does not use a name check.

if __name__ in [\"__main__\", \"__builtin__\"]:\n    main()\n

A script can add sessions to the core-daemon. A global coreemu variable is exposed to the script pointing to the CoreEmu object.

The example below has a fallback to a new CoreEmu object, in the case you would like to run the script standalone, outside of the core-daemon.

coreemu = globals().get(\"coreemu\") or CoreEmu()\nsession = coreemu.create_session()\n
"},{"location":"services.html","title":"Services (Deprecated)","text":""},{"location":"services.html#overview","title":"Overview","text":"

CORE uses the concept of services to specify what processes or scripts run on a node when it is started. Layer-3 nodes such as routers and PCs are defined by the services that they run.

Services may be customized for each node, or new custom services can be created. New node types can be created each having a different name, icon, and set of default services. Each service defines the per-node directories, configuration files, startup index, starting commands, validation commands, shutdown commands, and meta-data associated with a node.

Note

Network namespace nodes do not undergo the normal Linux boot process using the init, upstart, or systemd frameworks. These lightweight nodes use configured CORE services.

"},{"location":"services.html#available-services","title":"Available Services","text":"Service Group Services BIRD BGP, OSPF, RADV, RIP, Static EMANE Transport Service FRR BABEL, BGP, OSPFv2, OSPFv3, PIMD, RIP, RIPNG, Zebra NRL arouted, MGEN Sink, MGEN Actor, NHDP, OLSR, OLSRORG, OLSRv2, SMF Quagga BABEL, BGP, OSPFv2, OSPFv3, OSPFv3 MDR, RIP, RIPNG, XPIMD, Zebra SDN OVS, RYU Security Firewall, IPsec, NAT, VPN Client, VPN Server Utility ATD, Routing Utils, DHCP, FTP, IP Forward, PCAP, RADVD, SSF, UCARP XORP BGP, OLSR, OSPFv2, OSPFv3, PIMSM4, PIMSM6, RIP, RIPNG, Router Manager"},{"location":"services.html#node-types-and-default-services","title":"Node Types and Default Services","text":"

Here are the default node types and their services:

Node Type Services router zebra, OSPFv2, OSPFv3, and IPForward services for IGP link-state routing. host DefaultRoute and SSH services, representing an SSH server having a default route when connected directly to a router. PC DefaultRoute service for having a default route when connected directly to a router. mdr zebra, OSPFv3MDR, and IPForward services for wireless-optimized MANET Designated Router routing. prouter a physical router, having the same default services as the router node type; for incorporating Linux testbed machines into an emulation.

Configuration files can be automatically generated by each service. For example, CORE automatically generates routing protocol configuration for the router nodes in order to simplify the creation of virtual networks.

To change the services associated with a node, double-click on the node to invoke its configuration dialog and click on the Services... button, or right-click a node and choose Services... from the menu. Services are enabled or disabled by clicking on their names. The button next to each service name allows you to customize all aspects of this service for this node. For example, special route redistribution commands could be inserted into the Quagga routing configuration associated with the zebra service.

To change the default services associated with a node type, use the Node Types dialog available from the Edit button at the end of the Layer-3 nodes toolbar, or choose Node types... from the Session menu. Note that any new services selected are not applied to existing nodes if the nodes have been customized.

"},{"location":"services.html#customizing-a-service","title":"Customizing a Service","text":"

A service can be fully customized for a particular node. From the node's configuration dialog, click on the button next to the service name to invoke the service customization dialog for that service. The dialog has three tabs for configuring the different aspects of the service: files, directories, and startup/shutdown.

Note

A yellow customize icon next to a service indicates that service requires customization (e.g. the Firewall service). A green customize icon indicates that a custom configuration exists. Click the Defaults button when customizing a service to remove any customizations.

The Files tab is used to display or edit the configuration files or scripts that are used for this service. Files can be selected from a drop-down list, and their contents are displayed in a text entry below. The file contents are generated by the CORE daemon based on the network topology that exists at the time the customization dialog is invoked.

The Directories tab shows the per-node directories for this service. For the default types, CORE nodes share the same filesystem tree, except for these per-node directories that are defined by the services. For example, the /var/run/quagga directory needs to be unique for each node running the Zebra service, because Quagga running on each node needs to write separate PID files to that directory.

Note

The /var/log and /var/run directories are mounted uniquely per-node by default. Per-node mount targets can be found in /tmp/pycore./.conf/

The Startup/shutdown tab lists commands that are used to start and stop this service. The startup index allows configuring when this service starts relative to the other services enabled for this node; a service with a lower startup index value is started before those with higher values. Because shell scripts generated by the Files tab will not have execute permissions set, the startup commands should include the shell name, with something like sh script.sh.

Shutdown commands optionally terminate the process(es) associated with this service. Generally they send a kill signal to the running process using the kill or killall commands. If the service does not terminate the running processes using a shutdown command, the processes will be killed when the vnoded daemon is terminated (with kill -9) and the namespace destroyed. It is a good practice to specify shutdown commands, which will allow for proper process termination, and for run-time control of stopping and restarting services.

Validate commands are executed following the startup commands. A validate command can execute a process or script that should return zero if the service has started successfully, and have a non-zero return value for services that have had a problem starting. For example, the pidof command will check if a process is running and return zero when found. When a validate command produces a non-zero return value, an exception is generated, which will cause an error to be displayed in the Check Emulation Light.
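
As a hedged sketch only, the custom service class variables described under New Services below could express such startup, validate, and shutdown commands for a hypothetical daemon (the service and process names here are invented for the example):

from typing import Tuple\n\nfrom core.services.coreservices import CoreService\n\n\nclass MyDaemonService(CoreService):\n    # hypothetical service, purely for illustration\n    name: str = \"MyDaemon\"\n    group: str = \"Utility\"\n    configs: Tuple[str, ...] = (\"mydaemon.sh\",)\n    # generated scripts lack execute permissions, so invoke them with sh\n    startup: Tuple[str, ...] = (\"sh mydaemon.sh\",)\n    # pidof returns zero when the process is found, signaling a successful start\n    validate: Tuple[str, ...] = (\"pidof mydaemon\",)\n    # send a kill signal so the process terminates cleanly at shutdown\n    shutdown: Tuple[str, ...] = (\"killall mydaemon\",)\n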

Note

To start, stop, and restart services during run-time, right-click a node and use the Services... menu.

"},{"location":"services.html#new-services","title":"New Services","text":"

Services can save the time required to configure nodes, especially if a number of nodes require similar configuration procedures. New services can be introduced to automate tasks.

"},{"location":"services.html#leveraging-userdefined","title":"Leveraging UserDefined","text":"

The easiest way to capture the configuration of a new process into a service is by using the UserDefined service. This is a blank service where any aspect may be customized. The UserDefined service is convenient for testing ideas for a service before adding a new service type.
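
For instance, a minimal sketch using the session-level service API shown in the Python section (node_id is assumed to be the id of an existing node) might configure the UserDefined service like this:

# configure the UserDefined service for a node, for a given session\nsession.services.set_service(node_id, \"UserDefined\")\nservice = session.services.get_service(node_id, \"UserDefined\")\nservice.configs = (\"start.sh\",)\nservice.startup = (\"sh start.sh\",)\n# provide the contents of the file that will be generated\nsession.services.set_service_file(node_id, \"UserDefined\", \"start.sh\", \"echo hello\")\n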

"},{"location":"services.html#creating-new-services","title":"Creating New Services","text":"

Note

The directory name used in custom_services_dir below should be unique and should not correspond to any existing Python module name. For example, don't use the name subprocess or services.

  1. Modify the example service shown below to do what you want. It could generate config/script files, mount per-node directories, start processes/scripts, etc. sample.py is a Python file that defines one or more classes to be imported. You can create multiple Python files that will be imported.

  2. Put these files in a directory such as /home/<user>/.coregui/custom_services. Note that the last component of this directory name, custom_services, should not be named the same as any Python module, due to naming conflicts.

  3. Add a custom_services_dir = /home/<user>/.coregui/custom_services entry to the /etc/core/core.conf file.

  4. Restart the CORE daemon (core-daemon). Any import errors (Python syntax) should be displayed in the daemon output.

  5. Start using your custom service on your nodes. You can create a new node type that uses your service, or change the default services for an existing node type, or change individual nodes.

If you have created a new service type that may be useful to others, please consider contributing it to the CORE project.

"},{"location":"services.html#example-custom-service","title":"Example Custom Service","text":"

Below is the skeleton for a custom service with some documentation. Most people would likely only set up the required class variables (name/group), then define the configs (files they want to generate) and implement the generate_config function to dynamically create the desired files. Finally, the startup commands would be supplied, which typically amount to running the generated shell files.

\"\"\"\nSimple example custom service, used to drive shell commands on a node.\n\"\"\"\nfrom typing import Tuple\n\nfrom core.nodes.base import CoreNode\nfrom core.services.coreservices import CoreService, ServiceMode\n\n\nclass ExampleService(CoreService):\n\"\"\"\n    Example Custom CORE Service\n\n    :cvar name: name used as a unique ID for this service and is required, no spaces\n    :cvar group: allows you to group services within the GUI under a common name\n    :cvar executables: executables this service depends on to function, if executable is\n        not on the path, service will not be loaded\n    :cvar dependencies: services that this service depends on for startup, tuple of\n        service names\n    :cvar dirs: directories that this service will create within a node\n    :cvar configs: files that this service will generate, without a full path this file\n        goes in the node's directory e.g. /tmp/pycore.12345/n1.conf/myfile\n    :cvar startup: commands used to start this service, any non-zero exit code will\n        cause a failure\n    :cvar validate: commands used to validate that a service was started, any non-zero\n        exit code will cause a failure\n    :cvar validation_mode: validation mode, used to determine startup success.\n        NON_BLOCKING    - runs startup commands, and validates success with validation commands\n        BLOCKING        - runs startup commands, and validates success with the startup commands themselves\n        TIMER           - runs startup commands, and validates success by waiting for \"validation_timer\" alone\n    :cvar validation_timer: time in seconds for a service to wait for validation, before\n        determining success in TIMER/NON_BLOCKING modes.\n    :cvar validation_period: period in seconds to wait before retrying validation,\n        only used in NON_BLOCKING mode\n    :cvar shutdown: shutdown commands to stop this service\n    \"\"\"\n\n    name: str = \"ExampleService\"\n    group: str = \"Utility\"\n    executables: Tuple[str, ...] = ()\n    dependencies: Tuple[str, ...] = ()\n    dirs: Tuple[str, ...] = ()\n    configs: Tuple[str, ...] = (\"myservice1.sh\", \"myservice2.sh\")\n    startup: Tuple[str, ...] = tuple(f\"sh {x}\" for x in configs)\n    validate: Tuple[str, ...] = ()\n    validation_mode: ServiceMode = ServiceMode.NON_BLOCKING\n    validation_timer: int = 5\n    validation_period: float = 0.5\n    shutdown: Tuple[str, ...] = ()\n\n    @classmethod\n    def on_load(cls) -> None:\n\"\"\"\n        Provides a way to run some arbitrary logic when the service is loaded, possibly\n        to help facilitate dynamic settings for the environment.\n\n        :return: nothing\n        \"\"\"\n        pass\n\n    @classmethod\n    def get_configs(cls, node: CoreNode) -> Tuple[str, ...]:\n\"\"\"\n        Provides a way to dynamically generate the config files from the node a service\n        will run. Defaults to the class definition and can be left out entirely if not\n        needed.\n\n        :param node: core node that the service is being ran on\n        :return: tuple of config files to create\n        \"\"\"\n        return cls.configs\n\n    @classmethod\n    def generate_config(cls, node: CoreNode, filename: str) -> str:\n\"\"\"\n        Returns a string representation for a file, given the node the service is\n        starting on the config filename that this information will be used for. 
This\n        must be defined, if \"configs\" are defined.\n\n        :param node: core node that the service is being ran on\n        :param filename: configuration file to generate\n        :return: configuration file content\n        \"\"\"\n        cfg = \"#!/bin/sh\\n\"\n        if filename == cls.configs[0]:\n            cfg += \"# auto-generated by MyService (sample.py)\\n\"\n            for iface in node.get_ifaces():\n                cfg += f'echo \"Node {node.name} has interface {iface.name}\"\\n'\n        elif filename == cls.configs[1]:\n            cfg += \"echo hello\"\n        return cfg\n\n    @classmethod\n    def get_startup(cls, node: CoreNode) -> Tuple[str, ...]:\n\"\"\"\n        Provides a way to dynamically generate the startup commands from the node a\n        service will run. Defaults to the class definition and can be left out entirely\n        if not needed.\n\n        :param node: core node that the service is being ran on\n        :return: tuple of startup commands to run\n        \"\"\"\n        return cls.startup\n\n    @classmethod\n    def get_validate(cls, node: CoreNode) -> Tuple[str, ...]:\n\"\"\"\n        Provides a way to dynamically generate the validate commands from the node a\n        service will run. Defaults to the class definition and can be left out entirely\n        if not needed.\n\n        :param node: core node that the service is being ran on\n        :return: tuple of commands to validate service startup with\n        \"\"\"\n        return cls.validate\n
"},{"location":"emane/antenna.html","title":"EMANE Antenna Profiles","text":""},{"location":"emane/antenna.html#overview","title":"Overview","text":"

Introduction to using the EMANE antenna profile in CORE, based on the example EMANE Demo linked below.

See EMANE Demo 6 for more specifics.

"},{"location":"emane/antenna.html#demo-setup","title":"Demo Setup","text":"

We will need to create some files in advance of starting this session.

Create a directory to place the antenna profile files.

mkdir /tmp/emane\n

Create /tmp/emane/antennaprofile.xml with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE profiles SYSTEM \"file:///usr/share/emane/dtd/antennaprofile.dtd\">\n<profiles>\n<profile id=\"1\"\nantennapatternuri=\"/tmp/emane/antenna30dsector.xml\"\nblockagepatternuri=\"/tmp/emane/blockageaft.xml\">\n<placement north=\"0\" east=\"0\" up=\"0\"/>\n</profile>\n<profile id=\"2\"\nantennapatternuri=\"/tmp/emane/antenna30dsector.xml\">\n<placement north=\"0\" east=\"0\" up=\"0\"/>\n</profile>\n</profiles>\n

Create /tmp/emane/antenna30dsector.xml with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE antennaprofile SYSTEM \"file:///usr/share/emane/dtd/antennaprofile.dtd\">\n\n<!-- 30degree sector antenna pattern with main beam at +6dB and gain decreasing by 3dB every 5 degrees in elevation or bearing.-->\n<antennaprofile>\n<antennapattern>\n<elevation min='-90' max='-16'>\n<bearing min='0' max='359'>\n<gain value='-200'/>\n</bearing>\n</elevation>\n<elevation min='-15' max='-11'>\n<bearing min='0' max='5'>\n<gain value='0'/>\n</bearing>\n<bearing min='6' max='10'>\n<gain value='-3'/>\n</bearing>\n<bearing min='11' max='15'>\n<gain value='-6'/>\n</bearing>\n<bearing min='16' max='344'>\n<gain value='-200'/>\n</bearing>\n<bearing min='345' max='349'>\n<gain value='-6'/>\n</bearing>\n<bearing min='350' max='354'>\n<gain value='-3'/>\n</bearing>\n<bearing min='355' max='359'>\n<gain value='0'/>\n</bearing>\n</elevation>\n<elevation min='-10' max='-6'>\n<bearing min='0' max='5'>\n<gain value='3'/>\n</bearing>\n<bearing min='6' max='10'>\n<gain value='0'/>\n</bearing>\n<bearing min='11' max='15'>\n<gain value='-3'/>\n</bearing>\n<bearing min='16' max='344'>\n<gain value='-200'/>\n</bearing>\n<bearing min='345' max='349'>\n<gain value='-3'/>\n</bearing>\n<bearing min='350' max='354'>\n<gain value='0'/>\n</bearing>\n<bearing min='355' max='359'>\n<gain value='3'/>\n</bearing>\n</elevation>\n<elevation min='-5' max='-1'>\n<bearing min='0' max='5'>\n<gain value='6'/>\n</bearing>\n<bearing min='6' max='10'>\n<gain value='3'/>\n</bearing>\n<bearing min='11' max='15'>\n<gain value='0'/>\n</bearing>\n<bearing min='16' max='344'>\n<gain value='-200'/>\n</bearing>\n<bearing min='345' max='349'>\n<gain value='0'/>\n</bearing>\n<bearing min='350' max='354'>\n<gain value='3'/>\n</bearing>\n<bearing min='355' max='359'>\n<gain value='6'/>\n</bearing>\n</elevation>\n<elevation min='0' max='5'>\n<bearing min='0' max='5'>\n<gain value='6'/>\n</bearing>\n<bearing min='6' max='10'>\n<gain value='3'/>\n</bearing>\n<bearing min='11' max='15'>\n<gain value='0'/>\n</bearing>\n<bearing min='16' max='344'>\n<gain value='-200'/>\n</bearing>\n<bearing min='345' max='349'>\n<gain value='0'/>\n</bearing>\n<bearing min='350' max='354'>\n<gain value='3'/>\n</bearing>\n<bearing min='355' max='359'>\n<gain value='6'/>\n</bearing>\n</elevation>\n<elevation min='6' max='10'>\n<bearing min='0' max='5'>\n<gain value='3'/>\n</bearing>\n<bearing min='6' max='10'>\n<gain value='0'/>\n</bearing>\n<bearing min='11' max='15'>\n<gain value='-3'/>\n</bearing>\n<bearing min='16' max='344'>\n<gain value='-200'/>\n</bearing>\n<bearing min='345' max='349'>\n<gain value='-3'/>\n</bearing>\n<bearing min='350' max='354'>\n<gain value='0'/>\n</bearing>\n<bearing min='355' max='359'>\n<gain value='3'/>\n</bearing>\n</elevation>\n<elevation min='11' max='15'>\n<bearing min='0' max='5'>\n<gain value='0'/>\n</bearing>\n<bearing min='6' max='10'>\n<gain value='-3'/>\n</bearing>\n<bearing min='11' max='15'>\n<gain value='-6'/>\n</bearing>\n<bearing min='16' max='344'>\n<gain value='-200'/>\n</bearing>\n<bearing min='345' max='349'>\n<gain value='-6'/>\n</bearing>\n<bearing min='350' max='354'>\n<gain value='-3'/>\n</bearing>\n<bearing min='355' max='359'>\n<gain value='0'/>\n</bearing>\n</elevation>\n<elevation min='16' max='90'>\n<bearing min='0' max='359'>\n<gain value='-200'/>\n</bearing>\n</elevation>\n</antennapattern>\n</antennaprofile>\n

Create /tmp/emane/blockageaft.xml with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE antennaprofile SYSTEM \"file:///usr/share/emane/dtd/antennaprofile.dtd\">\n\n<!-- blockage pattern: 1) entire aft in bearing (90 to 270) blocked 2) elevation below -10 blocked, 3) elevation from -10 to -1 is at -10dB to -1 dB 3) elevation from 0 to 90 no blockage-->\n<antennaprofile>\n<blockagepattern>\n<elevation min='-90' max='-11'>\n<bearing min='0' max='359'>\n<gain value='-200'/>\n</bearing>\n</elevation>\n<elevation min='-10' max='-10'>\n<bearing min='0' max='89'>\n<gain value='-10'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-10'/>\n</bearing>\n</elevation>\n<elevation min='-9' max='-9'>\n<bearing min='0' max='89'>\n<gain value='-9'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-9'/>\n</bearing>\n</elevation>\n<elevation min='-8' max='-8'>\n<bearing min='0' max='89'>\n<gain value='-8'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-8'/>\n</bearing>\n</elevation>\n<elevation min='-7' max='-7'>\n<bearing min='0' max='89'>\n<gain value='-7'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-7'/>\n</bearing>\n</elevation>\n<elevation min='-6' max='-6'>\n<bearing min='0' max='89'>\n<gain value='-6'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-6'/>\n</bearing>\n</elevation>\n<elevation min='-5' max='-5'>\n<bearing min='0' max='89'>\n<gain value='-5'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-5'/>\n</bearing>\n</elevation>\n<elevation min='-4' max='-4'>\n<bearing min='0' max='89'>\n<gain value='-4'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-4'/>\n</bearing>\n</elevation>\n<elevation min='-3' max='-3'>\n<bearing min='0' max='89'>\n<gain value='-3'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-3'/>\n</bearing>\n</elevation>\n<elevation min='-2' max='-2'>\n<bearing min='0' max='89'>\n<gain value='-2'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-2'/>\n</bearing>\n</elevation>\n<elevation min='-1' max='-1'>\n<bearing min='0' max='89'>\n<gain value='-1'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='-1'/>\n</bearing>\n</elevation>\n<elevation min='0' max='90'>\n<bearing min='0' max='89'>\n<gain value='0'/>\n</bearing>\n<bearing min='90' max='270'>\n<gain value='-200'/>\n</bearing>\n<bearing min='271' max='359'>\n<gain value='0'/>\n</bearing>\n</elevation>\n</blockagepattern>\n</antennaprofile>\n
"},{"location":"emane/antenna.html#run-demo","title":"Run Demo","text":"
  1. Select Open... within the GUI
  2. Load emane-demo-antenna.xml
  3. Click
  4. After startup completes, double-click n1 to bring up the node's terminal
"},{"location":"emane/antenna.html#example-demo","title":"Example Demo","text":"

This demo will cover running an EMANE event service to feed in antenna, location, and pathloss events to demonstrate how antenna profiles can be used.

"},{"location":"emane/antenna.html#emane-event-dump","title":"EMANE Event Dump","text":"

On n1, let's dump EMANE events, so that when we later run the EMANE event service you can monitor when and what is sent.

root@n1:/tmp/pycore.44917/n1.conf# emaneevent-dump -i ctrl0\n
"},{"location":"emane/antenna.html#send-emane-events","title":"Send EMANE Events","text":"

On the host machine create the following to send EMANE events.

Warning

Make sure to set the eventservicedevice to the proper control network value

Create eventservice.xml with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventservice SYSTEM \"file:///usr/share/emane/dtd/eventservice.dtd\">\n<eventservice>\n<param name=\"eventservicegroup\" value=\"224.1.2.8:45703\"/>\n<param name=\"eventservicedevice\" value=\"b.9001.da\"/>\n<generator definition=\"eelgenerator.xml\"/>\n</eventservice>\n

Create eelgenerator.xml with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventgenerator SYSTEM \"file:///usr/share/emane/dtd/eventgenerator.dtd\">\n<eventgenerator library=\"eelgenerator\">\n<param name=\"inputfile\" value=\"scenario.eel\"/>\n<paramlist name=\"loader\">\n<item value=\"commeffect:eelloadercommeffect:delta\"/>\n<item value=\"location,velocity,orientation:eelloaderlocation:delta\"/>\n<item value=\"pathloss:eelloaderpathloss:delta\"/>\n<item value=\"antennaprofile:eelloaderantennaprofile:delta\"/>\n</paramlist>\n</eventgenerator>\n

Create scenario.eel with the following contents.

0.0 nem:1 antennaprofile 1,0.0,0.0\n0.0 nem:4 antennaprofile 2,0.0,0.0\n#\n0.0 nem:1  pathloss nem:2,60  nem:3,60   nem:4,60\n0.0 nem:2  pathloss nem:3,60  nem:4,60\n0.0 nem:3  pathloss nem:4,60\n#\n0.0 nem:1  location gps 40.025495,-74.315441,3.0\n0.0 nem:2  location gps 40.025495,-74.312501,3.0\n0.0 nem:3  location gps 40.023235,-74.315441,3.0\n0.0 nem:4  location gps 40.023235,-74.312501,3.0\n0.0 nem:4  velocity 180.0,0.0,10.0\n#\n30.0 nem:1 velocity 20.0,0.0,10.0\n30.0 nem:1 orientation 0.0,0.0,10.0\n30.0 nem:1 antennaprofile 1,60.0,0.0\n30.0 nem:4 velocity 270.0,0.0,10.0\n#\n60.0 nem:1 antennaprofile 1,105.0,0.0\n60.0 nem:4 antennaprofile 2,45.0,0.0\n#\n90.0 nem:1 velocity 90.0,0.0,10.0\n90.0 nem:1 orientation 0.0,0.0,0.0\n90.0 nem:1 antennaprofile 1,45.0,0.0\n

Run the EMANE event service, monitor the events dumped on n1, and watch the link changes within the CORE GUI.

emaneeventservice -l 3 eventservice.xml\n
"},{"location":"emane/antenna.html#stages","title":"Stages","text":"

The events sent will trigger 4 different states.

  • State 1
    • n2 and n3 see each other
    • n4 and n3 are pointing away
  • State 2
    • n2 and n3 see each other
    • n1 and n2 see each other
    • n4 and n3 see each other
  • State 3
    • n2 and n3 see each other
    • n4 and n3 are pointing at each other but blocked
  • State 4
    • n2 and n3 see each other
    • n4 and n3 see each other
"},{"location":"emane/eel.html","title":"EMANE Emulation Event Log (EEL) Generator","text":""},{"location":"emane/eel.html#overview","title":"Overview","text":"

Introduction to using the EMANE event service and eel files to provide events.

See EMANE Demo 1 for more specifics.

"},{"location":"emane/eel.html#run-demo","title":"Run Demo","text":"
  1. Select Open... within the GUI
  2. Load emane-demo-eel.xml
  3. Click
  4. After startup completes, double-click n1 to bring up the node's terminal
"},{"location":"emane/eel.html#example-demo","title":"Example Demo","text":"

This demo will go over defining an EMANE event service and an EEL file to drive it.

"},{"location":"emane/eel.html#viewing-events","title":"Viewing Events","text":"

On n1 we will use the EMANE event dump utility to listen to events.

root@n1:/tmp/pycore.46777/n1.conf# emaneevent-dump -i ctrl0\n
"},{"location":"emane/eel.html#sending-events","title":"Sending Events","text":"

On the host machine we will create the following files and start the EMANE event service targeting the control network.

Warning

Make sure to set the eventservicedevice to the proper control network value

Create eventservice.xml with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventservice SYSTEM \"file:///usr/share/emane/dtd/eventservice.dtd\">\n<eventservice>\n<param name=\"eventservicegroup\" value=\"224.1.2.8:45703\"/>\n<param name=\"eventservicedevice\" value=\"b.9001.f\"/>\n<generator definition=\"eelgenerator.xml\"/>\n</eventservice>\n

Next we will create the eelgenerator.xml file. The EEL Generator is actually a plugin that loads sentence parsing plugins. The sentence parsing plugins know how to convert certain sentences, in this case commeffect, location, velocity, orientation, pathloss, and antennaprofile sentences, into their corresponding EMANE event equivalents.

  • commeffect:eelloadercommeffect:delta
  • location,velocity,orientation:eelloaderlocation:delta
  • pathloss:eelloaderpathloss:delta
  • antennaprofile:eelloaderantennaprofile:delta

These configuration items tell the EEL Generator which sentences to map to which plugin and whether to issue delta or full updates.

Create eelgenerator.xml with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventgenerator SYSTEM \"file:///usr/share/emane/dtd/eventgenerator.dtd\">\n<eventgenerator library=\"eelgenerator\">\n<param name=\"inputfile\" value=\"scenario.eel\"/>\n<paramlist name=\"loader\">\n<item value=\"commeffect:eelloadercommeffect:delta\"/>\n<item value=\"location,velocity,orientation:eelloaderlocation:delta\"/>\n<item value=\"pathloss:eelloaderpathloss:delta\"/>\n<item value=\"antennaprofile:eelloaderantennaprofile:delta\"/>\n</paramlist>\n</eventgenerator>\n

Finally, create scenario.eel with the following contents.

0.0  nem:1 pathloss nem:2,90.0\n0.0  nem:2 pathloss nem:1,90.0\n0.0  nem:1 location gps 40.031075,-74.523518,3.000000\n0.0  nem:2 location gps 40.031165,-74.523412,3.000000\n

Start the EMANE event service using the files created above.

emaneeventservice eventservice.xml -l 3\n
"},{"location":"emane/eel.html#sent-events","title":"Sent Events","text":"

If we go back to our original terminal, we will see the events logged out.

root@n1:/tmp/pycore.46777/n1.conf# emaneevent-dump -i ctrl0\n[1601858142.917224] nem: 0 event: 100 len: 66 seq: 1 [Location]\nUUID: 0af267be-17d3-4103-9f76-6f697e13bcec\n   (1, {'latitude': 40.031075, 'altitude': 3.0, 'longitude': -74.823518})\n(2, {'latitude': 40.031165, 'altitude': 3.0, 'longitude': -74.523412})\n[1601858142.917466] nem: 1 event: 101 len: 14 seq: 2 [Pathloss]\nUUID: 0af267be-17d3-4103-9f76-6f697e13bcec\n   (2, {'forward': 90.0, 'reverse': 90.0})\n[1601858142.917889] nem: 2 event: 101 len: 14 seq: 3 [Pathloss]\nUUID: 0af267be-17d3-4103-9f76-6f697e13bcec\n   (1, {'forward': 90.0, 'reverse': 90.0})\n
"},{"location":"emane/files.html","title":"EMANE XML Files","text":""},{"location":"emane/files.html#overview","title":"Overview","text":"

Introduction to the XML files generated by CORE to drive EMANE for a given node.

EMANE Demo 0 may provide more helpful details.

"},{"location":"emane/files.html#run-demo","title":"Run Demo","text":"
  1. Select Open... within the GUI
  2. Load emane-demo-files.xml
  3. Click
  4. After startup completes, double-click n1 to bring up the node's terminal
"},{"location":"emane/files.html#example-demo","title":"Example Demo","text":"

We will take a look at the files generated in the example demo provided. In this case we are running the RF Pipe model.

"},{"location":"emane/files.html#generated-files","title":"Generated Files","text":"Name Description \\-platform.xml configuration file for the emulator instances \\-nem.xml configuration for creating a NEM \\-mac.xml configuration for defining a NEMs MAC layer \\-phy.xml configuration for defining a NEMs PHY layer \\-trans-virtual.xml configuration when a virtual transport is being used \\-trans.xml configuration when a raw transport is being used"},{"location":"emane/files.html#listing-file","title":"Listing File","text":"

Below are the files within n1 after starting the demo session.

root@n1:/tmp/pycore.46777/n1.conf# ls\neth0-mac.xml  eth0-trans-virtual.xml  n1-platform.xml       var.log\neth0-nem.xml  ipforward.sh            quaggaboot.sh         var.run\neth0-phy.xml  n1-emane.log            usr.local.etc.quagga  var.run.quagga\n
"},{"location":"emane/files.html#platform-xml","title":"Platform XML","text":"

The root configuration file used to run EMANE for a node is the platform xml file. In this demo we are looking at n1-platform.xml.

  • Lists all configuration values set for the platform
  • The unique nem id given for each interface that EMANE will create for this node
  • The path to the file(s) used to define a given nem
root@n1:/tmp/pycore.46777/n1.conf# cat n1-platform.xml\n<?xml version='1.0' encoding='UTF-8'?>\n<!DOCTYPE platform SYSTEM \"file:///usr/share/emane/dtd/platform.dtd\">\n<platform>\n  <param name=\"antennaprofilemanifesturi\" value=\"\"/>\n  <param name=\"controlportendpoint\" value=\"0.0.0.0:47000\"/>\n  <param name=\"eventservicedevice\" value=\"ctrl0\"/>\n  <param name=\"eventservicegroup\" value=\"224.1.2.8:45703\"/>\n  <param name=\"eventservicettl\" value=\"1\"/>\n  <param name=\"otamanagerchannelenable\" value=\"1\"/>\n  <param name=\"otamanagerdevice\" value=\"ctrl0\"/>\n  <param name=\"otamanagergroup\" value=\"224.1.2.8:45702\"/>\n  <param name=\"otamanagerloopback\" value=\"0\"/>\n  <param name=\"otamanagermtu\" value=\"0\"/>\n  <param name=\"otamanagerpartcheckthreshold\" value=\"2\"/>\n  <param name=\"otamanagerparttimeoutthreshold\" value=\"5\"/>\n  <param name=\"otamanagerttl\" value=\"1\"/>\n  <param name=\"stats.event.maxeventcountrows\" value=\"0\"/>\n  <param name=\"stats.ota.maxeventcountrows\" value=\"0\"/>\n  <param name=\"stats.ota.maxpacketcountrows\" value=\"0\"/>\n  <nem id=\"1\" name=\"tap1.0.f\" definition=\"eth0-nem.xml\">\n    <transport definition=\"eth0-trans-virtual.xml\">\n      <param name=\"device\" value=\"eth0\"/>\n    </transport>\n  </nem>\n</platform>\n
"},{"location":"emane/files.html#nem-xml","title":"NEM XML","text":"

The nem definition will contain references to the transport, mac, and phy xml definitions being used for a given nem.

root@n1:/tmp/pycore.46777/n1.conf# cat eth0-nem.xml\n<?xml version='1.0' encoding='UTF-8'?>\n<!DOCTYPE nem SYSTEM \"file:///usr/share/emane/dtd/nem.dtd\">\n<nem name=\"emane_rfpipe NEM\">\n  <transport definition=\"eth0-trans-virtual.xml\"/>\n  <mac definition=\"eth0-mac.xml\"/>\n  <phy definition=\"eth0-phy.xml\"/>\n</nem>\n
"},{"location":"emane/files.html#mac-xml","title":"MAC XML","text":"

MAC layer configuration settings are found in this file. CORE will write out all values, even if a value is the default.

root@n1:/tmp/pycore.46777/n1.conf# cat eth0-mac.xml\n<?xml version='1.0' encoding='UTF-8'?>\n<!DOCTYPE mac SYSTEM \"file:///usr/share/emane/dtd/mac.dtd\">\n<mac name=\"emane_rfpipe MAC\" library=\"rfpipemaclayer\">\n  <param name=\"datarate\" value=\"1000000\"/>\n  <param name=\"delay\" value=\"0.000000\"/>\n  <param name=\"enablepromiscuousmode\" value=\"0\"/>\n  <param name=\"flowcontrolenable\" value=\"0\"/>\n  <param name=\"flowcontroltokens\" value=\"10\"/>\n  <param name=\"jitter\" value=\"0.000000\"/>\n  <param name=\"neighbormetricdeletetime\" value=\"60.000000\"/>\n  <param name=\"pcrcurveuri\" value=\"/usr/share/emane/xml/models/mac/rfpipe/rfpipepcr.xml\"/>\n  <param name=\"radiometricenable\" value=\"0\"/>\n  <param name=\"radiometricreportinterval\" value=\"1.000000\"/>\n</mac>\n
"},{"location":"emane/files.html#phy-xml","title":"PHY XML","text":"

PHY layer configuration settings are found in this file. CORE will write out all values, even if a value is the default.

root@n1:/tmp/pycore.46777/n1.conf# cat eth0-phy.xml\n<?xml version='1.0' encoding='UTF-8'?>\n<!DOCTYPE phy SYSTEM \"file:///usr/share/emane/dtd/phy.dtd\">\n<phy name=\"emane_rfpipe PHY\">\n  <param name=\"bandwidth\" value=\"1000000\"/>\n  <param name=\"fading.model\" value=\"none\"/>\n  <param name=\"fading.nakagami.distance0\" value=\"100.000000\"/>\n  <param name=\"fading.nakagami.distance1\" value=\"250.000000\"/>\n  <param name=\"fading.nakagami.m0\" value=\"0.750000\"/>\n  <param name=\"fading.nakagami.m1\" value=\"1.000000\"/>\n  <param name=\"fading.nakagami.m2\" value=\"200.000000\"/>\n  <param name=\"fixedantennagain\" value=\"0.000000\"/>\n  <param name=\"fixedantennagainenable\" value=\"1\"/>\n  <param name=\"frequency\" value=\"2347000000\"/>\n  <param name=\"frequencyofinterest\" value=\"2347000000\"/>\n  <param name=\"noisebinsize\" value=\"20\"/>\n  <param name=\"noisemaxclampenable\" value=\"0\"/>\n  <param name=\"noisemaxmessagepropagation\" value=\"200000\"/>\n  <param name=\"noisemaxsegmentduration\" value=\"1000000\"/>\n  <param name=\"noisemaxsegmentoffset\" value=\"300000\"/>\n  <param name=\"noisemode\" value=\"none\"/>\n  <param name=\"propagationmodel\" value=\"2ray\"/>\n  <param name=\"subid\" value=\"1\"/>\n  <param name=\"systemnoisefigure\" value=\"4.000000\"/>\n  <param name=\"timesyncthreshold\" value=\"10000\"/>\n  <param name=\"txpower\" value=\"0.000000\"/>\n</phy>\n
"},{"location":"emane/files.html#transport-xml","title":"Transport XML","text":"
root@n1:/tmp/pycore.46777/n1.conf# cat eth0-trans-virtual.xml\n<?xml version='1.0' encoding='UTF-8'?>\n<!DOCTYPE transport SYSTEM \"file:///usr/share/emane/dtd/transport.dtd\">\n<transport name=\"Virtual Transport\" library=\"transvirtual\">\n  <param name=\"bitrate\" value=\"0\"/>\n  <param name=\"devicepath\" value=\"/dev/net/tun\"/>\n</transport>\n
"},{"location":"emane/gpsd.html","title":"EMANE GPSD Integration","text":""},{"location":"emane/gpsd.html#overview","title":"Overview","text":"

Introduction to integrating gpsd in CORE with EMANE.

EMANE Demo 0 may provide more helpful details.

Warning

Requires installation of gpsd

"},{"location":"emane/gpsd.html#run-demo","title":"Run Demo","text":"
  1. Select Open... within the GUI
  2. Load emane-demo-gpsd.xml
  3. Click
  4. After startup completes, double-click n1 to bring up the node's terminal
"},{"location":"emane/gpsd.html#example-demo","title":"Example Demo","text":"

This section will cover how to run a gpsd location agent within EMANE that will write out locations to a pseudo-terminal file. That file can be read by the gpsd server to make EMANE location events available to gpsd clients.

"},{"location":"emane/gpsd.html#emane-gpsd-event-daemon","title":"EMANE GPSD Event Daemon","text":"

First create an eventdaemon.xml file on n1 with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventdaemon SYSTEM \"file:///usr/share/emane/dtd/eventdaemon.dtd\">\n<eventdaemon nemid=\"1\">\n<param name=\"eventservicegroup\" value=\"224.1.2.8:45703\"/>\n<param name=\"eventservicedevice\" value=\"ctrl0\"/>\n<agent definition=\"gpsdlocationagent.xml\"/>\n</eventdaemon>\n

Then create the gpsdlocationagent.xml file on n1 with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventagent SYSTEM \"file:///usr/share/emane/dtd/eventagent.dtd\">\n<eventagent library=\"gpsdlocationagent\">\n<param name=\"pseudoterminalfile\" value=\"gps.pty\"/>\n</eventagent>\n

Start the EMANE event agent. This will facilitate feeding location events out to the pseudo-terminal file defined above.

emaneeventd eventdaemon.xml -r -d -l 3 -f emaneeventd.log\n

Start gpsd, reading in the pseudo terminal file.

gpsd -G -n -b $(cat gps.pty)\n
"},{"location":"emane/gpsd.html#emane-eel-event-daemon","title":"EMANE EEL Event Daemon","text":"

EEL Events will be played out from the actual host machine over the designated control network interface. Create the following files in the same directory somewhere on your host.

Note

Make sure the below eventservicedevice matches the control network device being used on the host for EMANE

Create eventservice.xml on the host machine with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventservice SYSTEM \"file:///usr/share/emane/dtd/eventservice.dtd\">\n<eventservice>\n<param name=\"eventservicegroup\" value=\"224.1.2.8:45703\"/>\n<param name=\"eventservicedevice\" value=\"b.9001.1\"/>\n<generator definition=\"eelgenerator.xml\"/>\n</eventservice>\n

Create eelgenerator.xml on the host machine with the following contents.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE eventgenerator SYSTEM \"file:///usr/share/emane/dtd/eventgenerator.dtd\">\n<eventgenerator library=\"eelgenerator\">\n<param name=\"inputfile\" value=\"scenario.eel\"/>\n<paramlist name=\"loader\">\n<item value=\"commeffect:eelloadercommeffect:delta\"/>\n<item value=\"location,velocity,orientation:eelloaderlocation:delta\"/>\n<item value=\"pathloss:eelloaderpathloss:delta\"/>\n<item value=\"antennaprofile:eelloaderantennaprofile:delta\"/>\n</paramlist>\n</eventgenerator>\n

Create scenario.eel file with the following contents.

0.0  nem:1 location gps 40.031075,-74.523518,3.000000\n0.0  nem:2 location gps 40.031165,-74.523412,3.000000\n

Start the EEL event service, which will send the events defined in the file above over the control network to all EMANE nodes. These location events will be received and provided to gpsd. This allows gpsd clients to connect and get GPS locations.

emaneeventservice eventservice.xml -l 3\n
"},{"location":"emane/precomputed.html","title":"EMANE Procomputed","text":""},{"location":"emane/precomputed.html#overview","title":"Overview","text":"

Introduction to using the precomputed propagation model.

See EMANE Demo 1 for more specifics.

"},{"location":"emane/precomputed.html#run-demo","title":"Run Demo","text":"
  1. Select Open... within the GUI
  2. Load emane-demo-precomputed.xml
  3. Click
  4. After startup completes, double-click n1 to bring up the node's terminal
"},{"location":"emane/precomputed.html#example-demo","title":"Example Demo","text":"

This demo is using the RF Pipe model with the propagation model set to precomputed.

"},{"location":"emane/precomputed.html#failed-pings","title":"Failed Pings","text":"

Because we are using the precomputed propagation model and have not sent any pathloss events, the nodes cannot ping each other yet.

Open a terminal on n1.

root@n1:/tmp/pycore.46777/n1.conf# ping 10.0.0.2\nconnect: Network is unreachable\n
"},{"location":"emane/precomputed.html#emane-shell","title":"EMANE Shell","text":"

You can leverage emanesh to investigate why packets are being dropped.

root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy BroadcastPacketDropTable0 UnicastPacketDropTable0\nnem 1   phy BroadcastPacketDropTable0\n| NEM | Out-of-Band | Rx Sensitivity | Propagation Model | Gain Location | Gain Horizon | Gain Profile | Not FOI | Spectrum Clamp | Fade Location | Fade Algorithm | Fade Select |\n| 2   | 0           | 0              | 169               | 0             | 0            | 0            | 0       | 0              | 0             | 0              | 0           |\n\nnem 1   phy UnicastPacketDropTable0\n| NEM | Out-of-Band | Rx Sensitivity | Propagation Model | Gain Location | Gain Horizon | Gain Profile | Not FOI | Spectrum Clamp | Fade Location | Fade Algorithm | Fade Select |\n

In the example above, we can see that packets are being dropped due to the propagation model, because we have not issued any pathloss events. You can run another command to validate whether you have received any pathloss events.

root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy  PathlossEventInfoTable\nnem 1   phy PathlossEventInfoTable\n| NEM | Forward Pathloss | Reverse Pathloss |\n
"},{"location":"emane/precomputed.html#pathloss-events","title":"Pathloss Events","text":"

On the host we will send pathloss events from all nems to all other nems.

Note

Make sure to properly specify the right control network device

emaneevent-pathloss 1:2 90 -i <controlnet device>\n

Now if we check for pathloss events on n2 we will see what was just sent above.

root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy  PathlossEventInfoTable\nnem 1   phy PathlossEventInfoTable\n| NEM | Forward Pathloss | Reverse Pathloss |\n| 2   | 90.0             | 90.0\n

You should also now be able to ping n1 from n2.

root@n1:/tmp/pycore.46777/n1.conf# ping -c 3 10.0.0.2\nPING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.\n64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=3.06 ms\n64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=2.12 ms\n64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=1.99 ms\n\n--- 10.0.0.2 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2001ms\nrtt min/avg/max/mdev = 1.991/2.393/3.062/0.479 ms\n
"},{"location":"services/bird.html","title":"BIRD Internet Routing Daemon","text":""},{"location":"services/bird.html#overview","title":"Overview","text":"

The BIRD Internet Routing Daemon is a routing daemon; i.e., software responsible for managing kernel packet forwarding tables. It aims to be a dynamic IP routing daemon with full support for all modern routing protocols, an easy-to-use configuration interface, and a powerful route filtering language, primarily targeted at (but not limited to) Linux and other UNIX-like systems, and is distributed under the GNU General Public License. BIRD has a free implementation of several well known and common routing and router-supplemental protocols, namely RIP, RIPng, OSPFv2, OSPFv3, BGP, BFD, and NDP/RA. BIRD supports IPv4 and IPv6 address families, the Linux kernel, and several BSD variants (tested on FreeBSD, NetBSD and OpenBSD). BIRD consists of the bird daemon and the birdc interactive CLI client used for supervision.

In order to use the BIRD Internet Routing Daemon, you must first install the project on your machine.

"},{"location":"services/bird.html#bird-package-install","title":"BIRD Package Install","text":"
sudo apt-get install bird\n
"},{"location":"services/bird.html#bird-source-code-install","title":"BIRD Source Code Install","text":"

You can download BIRD source code from its official repository.

./configure\nmake\nsu\nmake install\nvi /etc/bird/bird.conf\n

The installation will place the bird directory inside /etc where you will also find its config file.

In order to use the BIRD Internet Routing Daemon, you must modify bird.conf, because the provided configuration file does nothing beyond allowing the bird daemon to start, which means that nothing else will happen if you run it.

"},{"location":"services/emane.html","title":"EMANE Services","text":""},{"location":"services/emane.html#overview","title":"Overview","text":"

EMANE related services for CORE.

"},{"location":"services/emane.html#transport-service","title":"Transport Service","text":"

Helps with setting up EMANE for using an external transport.

"},{"location":"services/frr.html","title":"FRRouting","text":""},{"location":"services/frr.html#overview","title":"Overview","text":"

FRRouting is a routing software package that provides TCP/IP based routing services with routing protocols support such as BGP, RIP, OSPF, IS-IS and more. FRR also supports special BGP Route Reflector and Route Server behavior. In addition to traditional IPv4 routing protocols, FRR also supports IPv6 routing protocols. With an SNMP daemon that supports the AgentX protocol, FRR provides routing protocol MIB read-only access (SNMP Support).

FRR (as of v7.2) currently supports the following protocols:

  • BGPv4
  • OSPFv2
  • OSPFv3
  • RIPv1/v2/ng
  • IS-IS
  • PIM-SM/MSDP/BSM(AutoRP)
  • LDP
  • BFD
  • Babel
  • PBR
  • OpenFabric
  • VRRPv2/v3
  • EIGRP (alpha)
  • NHRP (alpha)
"},{"location":"services/frr.html#frrouting-package-install","title":"FRRouting Package Install","text":"

Ubuntu 19.10 and later

sudo apt update && sudo apt install frr\n

Ubuntu 16.04 and Ubuntu 18.04

sudo apt install curl\ncurl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -\nFRRVER=\"frr-stable\"\necho deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list\nsudo apt update && sudo apt install frr frr-pythontools\n

Fedora 31

sudo dnf update && sudo dnf install frr\n
"},{"location":"services/frr.html#frrouting-source-code-install","title":"FRRouting Source Code Install","text":"

Building FRR from source is the best way to ensure you have the latest features and bug fixes. Details for each supported platform, including dependency package listings, permissions, and other gotchas, are in the developer's documentation.

FRR's source is available on the project GitHub page.

git clone https://github.com/FRRouting/frr.git\n

Change into your FRR source directory and issue:

./bootstrap.sh\n

Then, choose the configuration options that you wish to use for the installation. You can find these options on FRR's official webpage. Once you have chosen your configure options, run the configure script and pass the options you chose:

./configure \\\n--prefix=/usr \\\n--enable-exampledir=/usr/share/doc/frr/examples/ \\\n--localstatedir=/var/run/frr \\\n--sbindir=/usr/lib/frr \\\n--sysconfdir=/etc/frr \\\n--enable-pimd \\\n--enable-watchfrr \\\n...\n

After configuring the software, you are ready to build and install it in your system.

make && sudo make install\n

If everything finishes successfully, FRR should be installed.

"},{"location":"services/nrl.html","title":"NRL Services","text":""},{"location":"services/nrl.html#overview","title":"Overview","text":"

The Protean Protocol Prototyping Library (ProtoLib) is a cross-platform library that allows applications to be built while supporting a variety of platforms including Linux, Windows, WinCE/PocketPC, MacOS, FreeBSD, Solaris, etc as well as the simulation environments of NS2 and Opnet. The goal of the Protolib is to provide a set of simple, cross-platform C++ classes that allow development of network protocols and applications that can run on different platforms and in network simulation environments. While Protolib provides an overall framework for developing working protocol implementations, applications, and simulation modules, the individual classes are designed for use as stand-alone components when possible. Although Protolib is principally for research purposes, the code has been constructed to provide robust, efficient performance and adaptability to real applications. In some cases, the code consists of data structures, etc useful in protocol implementations and, in other cases, provides common, cross-platform interfaces to system services and functions (e.g., sockets, timers, routing tables, etc).

Currently, the Naval Research Laboratory uses this library to develop a wide variety of protocols. The NRL Protolib currently supports the following protocols:

  • MGEN_Sink
  • NHDP
  • SMF
  • OLSR
  • OLSRv2
  • OLSRORG
  • MgenActor
  • arouted
"},{"location":"services/nrl.html#nrl-installation","title":"NRL Installation","text":"

In order to be able to use the different protocols that NRL offers, you must first download the support library itself. You can get the source code from their NRL Protolib Repo.

"},{"location":"services/nrl.html#multi-generator-mgen","title":"Multi-Generator (MGEN)","text":"

Download MGEN from the NRL MGEN Repo, unpack it and copy the protolib library into the main folder mgen. Execute the following commands to build the protocol.

cd mgen/makefiles\nmake -f Makefile.{os} mgen\n
"},{"location":"services/nrl.html#neighborhood-discovery-protocol-nhdp","title":"Neighborhood Discovery Protocol (NHDP)","text":"

Download NHDP from the NRL NHDP Repo.

sudo apt-get install libpcap-dev libboost-all-dev\nwget https://github.com/protocolbuffers/protobuf/releases/download/v3.8.0/protoc-3.8.0-linux-x86_64.zip\nunzip protoc-3.8.0-linux-x86_64.zip\n

Then place the binaries in your $PATH. To see your paths, you can issue the following command.

echo $PATH\n

Go to the downloaded NHDP tarball, unpack it and place the protolib library inside the NHDP main folder. Now, compile the NHDP Protocol.

cd nhdp/unix\nmake -f Makefile.{os}\n
"},{"location":"services/nrl.html#simplified-multicast-forwarding-smf","title":"Simplified Multicast Forwarding (SMF)","text":"

Download SMF from the NRL SMF Repo, unpack it and place the protolib library inside the smf main folder.

cd mgen/makefiles\nmake -f Makefile.{os}\n
"},{"location":"services/nrl.html#optimized-link-state-routing-protocol-olsr","title":"Optimized Link State Routing Protocol (OLSR)","text":"

To install the OLSR protocol, download their source code from their NRL OLSR Repo. Unpack it and place the previously downloaded protolib library inside the nrlolsr main directory. Then execute the following commands:

cd ./unix\nmake -f Makefile.{os}\n
"},{"location":"services/quagga.html","title":"Quagga Routing Suite","text":""},{"location":"services/quagga.html#overview","title":"Overview","text":"

Quagga is a routing software suite, providing implementations of OSPFv2, OSPFv3, RIP v1 and v2, RIPng and BGP-4 for Unix platforms, particularly FreeBSD, Linux, Solaris and NetBSD. Quagga is a fork of GNU Zebra which was developed by Kunihiro Ishiguro. The Quagga architecture consists of a core daemon, zebra, which acts as an abstraction layer to the underlying Unix kernel and presents the Zserv API over a Unix or TCP stream to Quagga clients. It is these Zserv clients which typically implement a routing protocol and communicate routing updates to the zebra daemon.

"},{"location":"services/quagga.html#quagga-package-install","title":"Quagga Package Install","text":"
sudo apt-get install quagga\n
"},{"location":"services/quagga.html#quagga-source-install","title":"Quagga Source Install","text":"

First, download the source code from their official webpage.

sudo apt-get install gawk\n

Extract the tarball, go to the directory of your currently extracted code and issue the following commands.

./configure\nmake\nsudo make install\n
"},{"location":"services/sdn.html","title":"Software Defined Networking","text":""},{"location":"services/sdn.html#overview","title":"Overview","text":"

Ryu is a component-based software defined networking framework. Ryu provides software components with well-defined APIs that make it easy for developers to create new network management and control applications. Ryu supports various protocols for managing network devices, such as OpenFlow, Netconf, OF-config, etc. For OpenFlow, Ryu fully supports versions 1.0, 1.2, 1.3, 1.4, 1.5 and the Nicira Extensions. All of the code is freely available under the Apache 2.0 license.
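
As a hedged illustration of the kind of control application Ryu enables (not specific to CORE; the class name is invented, while the imports and handler decorator follow Ryu's standard packet-in example):

from ryu.base import app_manager\nfrom ryu.controller import ofp_event\nfrom ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls\n\n\nclass PacketInLogger(app_manager.RyuApp):\n    # log every OpenFlow packet-in message received from connected switches\n    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)\n    def packet_in_handler(self, ev):\n        self.logger.info(\"packet in on datapath %s\", ev.msg.datapath.id)\n

An application like this would typically be launched with ryu-manager.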

"},{"location":"services/sdn.html#installation","title":"Installation","text":""},{"location":"services/sdn.html#prerequisites","title":"Prerequisites","text":"
sudo apt-get install gcc python-dev libffi-dev libssl-dev libxml2-dev libxslt1-dev zlib1g-dev\n
"},{"location":"services/sdn.html#ryu-package-install","title":"Ryu Package Install","text":"
pip install ryu\n
"},{"location":"services/sdn.html#ryu-source-install","title":"Ryu Source Install","text":"
git clone git://github.com/osrg/ryu.git\ncd ryu\npip install .\n
"},{"location":"services/security.html","title":"Security Services","text":""},{"location":"services/security.html#overview","title":"Overview","text":"

The security services offer a wide variety of protocols capable of satisfying most use cases. These include the IP security protocols, which provide security at the IP layer through authentication and encryption of IP network packets. Virtual Private Networks (VPNs) and firewalls are also available to the user.

"},{"location":"services/security.html#installation","title":"Installation","text":"

Libraries needed for some security services.

sudo apt-get install ipsec-tools racoon\n
"},{"location":"services/security.html#openvpn","title":"OpenVPN","text":"

Below is a set of instructions for running a very simple OpenVPN client/server scenario.

"},{"location":"services/security.html#installation_1","title":"Installation","text":"
# install openvpn\nsudo apt install openvpn\n\n# retrieve easyrsa3 for key/cert generation\ngit clone https://github.com/OpenVPN/easy-rsa\n
"},{"location":"services/security.html#generating-keyscerts","title":"Generating Keys/Certs","text":"
# navigate into easyrsa3 repo subdirectory that contains built binary\ncd easy-rsa/easyrsa3\n\n# initialize pki\n./easyrsa init-pki\n\n# build ca\n./easyrsa build-ca\n\n# generate and sign server keypair(s)\nSERVER_NAME=server1\n./easyrsa gen-req $SERVER_NAME nopass\n./easyrsa sign-req server $SERVER_NAME\n\n# generate and sign client keypair(s)\nCLIENT_NAME=client1\n./easyrsa gen-req $CLIENT_NAME nopass\n./easyrsa sign-req client $CLIENT_NAME\n\n# DH generation\n./easyrsa gen-dh\n\n# create directory for keys for CORE to use\n# NOTE: the default is set to a directory that requires using sudo, but can be\n# anywhere and not require sudo at all\nKEYDIR=/etc/core/keys\nsudo mkdir $KEYDIR\n\n# move keys to directory\nsudo cp pki/ca.crt $KEYDIR\nsudo cp pki/issued/*.crt $KEYDIR\nsudo cp pki/private/*.key $KEYDIR\nsudo cp pki/dh.pem $KEYDIR/dh1024.pem\n
"},{"location":"services/security.html#configure-server-nodes","title":"Configure Server Nodes","text":"

Add VPNServer service to nodes desired for running an OpenVPN server.

Modify sampleVPNServer for the following

  • Edit keydir key/cert directory
  • Edit keyname to use generated server name above
  • Edit vpnserver to match an address that the server node will have
"},{"location":"services/security.html#configure-client-nodes","title":"Configure Client Nodes","text":"

Add VPNClient service to nodes desired for acting as an OpenVPN client.

Modify sampleVPNClient for the following

  • Edit keydir key/cert directory
  • Edit keyname to use generated client name above
  • Edit vpnserver to match the address a server was configured to use
"},{"location":"services/utility.html","title":"Utility Services","text":""},{"location":"services/utility.html#overview","title":"Overview","text":"

Variety of convenience services for carrying out common networking changes.

The following services are provided as utilities:

  • UCARP
  • IP Forward
  • Default Routing
  • Default Multicast Routing
  • Static Routing
  • SSH
  • DHCP
  • DHCP Client
  • FTP
  • HTTP
  • PCAP
  • RADVD
  • ATD
"},{"location":"services/utility.html#installation","title":"Installation","text":"

To install the packages needed by the previously mentioned services, you can run the following command:

sudo apt-get install isc-dhcp-server apache2 libpcap-dev radvd at\n
"},{"location":"services/utility.html#ucarp","title":"UCARP","text":"

UCARP allows a couple of hosts to share common virtual IP addresses in order to provide automatic failover. It is a portable userland implementation of the secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's alternative to the patents-bloated VRRP).

Strong points of the CARP protocol are: very low overhead, cryptographically signed messages, interoperability between different operating systems and no need for any dedicated extra network link between redundant hosts.

"},{"location":"services/utility.html#installation_1","title":"Installation","text":"
sudo apt-get install ucarp\n
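
As a point of reference for what this kind of failover setup involves, a typical manual UCARP invocation on each redundant host looks roughly like the following (all values are illustrative, and the up/down script paths are hypothetical helper scripts that add or remove the shared address):

ucarp --interface=eth0 --srcip=10.0.0.1 --vhid=1 --pass=secret --addr=10.0.0.100 --upscript=/usr/local/sbin/vip-up.sh --downscript=/usr/local/sbin/vip-down.sh\n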
"},{"location":"services/xorp.html","title":"XORP routing suite","text":""},{"location":"services/xorp.html#overview","title":"Overview","text":"

XORP is an open networking platform that supports OSPF, RIP, BGP, OLSR, VRRP, PIM, IGMP (Multicast) and other routing protocols. Most protocols support IPv4 and IPv6 where applicable. It is known to work on various Linux distributions and flavors of BSD.

XORP started life as a project at the ICSI Center for Open Networking (ICON) at the International Computer Science Institute in Berkeley, California, USA, and spent some time with the team at XORP, Inc. It is now maintained and improved on a volunteer basis by a core of long-term XORP developers and some newer contributors.

XORP's primary goal is to be an open platform for networking protocol implementations and an alternative to proprietary and closed networking products in the marketplace today. It is the only open source platform to offer integrated multicast capability.

XORP design philosophy is:

  • modularity
  • extensibility
  • performance
  • robustness

This is achieved by carefully separating functionalities into independent modules, and by providing an API for each module.

XORP divides into two subsystems. The higher-level (\"user-level\") subsystem consists of the routing protocols. The lower-level (\"kernel\") manages the forwarding path, and provides APIs for the higher-level to access.

User-level XORP uses a multi-process architecture with one process per routing protocol, and a novel inter-process communication mechanism called XRL (XORP Resource Locator).

The lower-level subsystem can use traditional UNIX kernel forwarding or the Click modular router. The modularity and independence of the lower-level subsystem from the user-level subsystem allows it to be easily replaced with other solutions, including high-end hardware-based forwarding engines.

"},{"location":"services/xorp.html#installation","title":"Installation","text":"

In order to install the XORP routing suite, you must first install scons, which is needed to compile it.

sudo apt-get install scons\n

Then download XORP from its official release web page, extract it, and build it from the source directory:

# download and extract a release from http://www.xorp.org/releases/current/\ncd xorp\nsudo apt-get install libssl-dev ncurses-dev\nscons\nscons install\n
"},{"location":"tutorials/overview.html","title":"CORE Tutorials","text":"

These tutorials will cover various use cases within CORE. These tutorials will provide example python, gRPC, XML, and related files, as well as an explanation for their usage and purpose.

"},{"location":"tutorials/overview.html#checklist","title":"Checklist","text":"

These are the items you should become familiar with for running all the tutorials below.

  • Install CORE
  • Tutorial Setup
"},{"location":"tutorials/overview.html#tutorials","title":"Tutorials","text":"
  • Tutorial 1 - Wired Network
    • Covers interactions when using a simple 2 node wired network
  • Tutorial 2 - Wireless Network
    • Covers interactions when using a simple 3 node wireless network
  • Tutorial 3 - Basic Mobility
    • Covers mobility interactions when using a simple 3 node wireless network
  • Tutorial 4 - Tests
    • Covers automating scenarios as tests to validate software
  • Tutorial 5 - RJ45 Node
    • Covers using the RJ45 node to connect a Windows OS
  • Tutorial 6 - Improve Visuals
    • Covers changing the look of a scenario within the CORE GUI
  • Tutorial 7 - EMANE
    • Covers using EMANE within CORE for higher fidelity RF networks
"},{"location":"tutorials/setup.html","title":"Tutorial Setup","text":""},{"location":"tutorials/setup.html#setup-for-core","title":"Setup for CORE","text":"

We assume CORE has already been installed using a virtual environment. You can then adjust your PATH and add an alias to run CORE commands more conveniently.

This can be set up in your .bashrc

export PATH=$PATH:/opt/core/venv/bin\nalias sudop='sudo env PATH=$PATH'\n
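
After adding those lines, reload your shell configuration (or open a new terminal) so the changes take effect:

source ~/.bashrc\n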
"},{"location":"tutorials/setup.html#setup-for-chat-app","title":"Setup for Chat App","text":"

A simple TCP chat app is provided as example software to use and run within the tutorials.

"},{"location":"tutorials/setup.html#installation","title":"Installation","text":"

The following will install chatapp and its scripts under /usr/local; you may need to add /usr/local/bin to PATH within a node to be able to use the commands directly.

sudo python3 -m pip install .\n

Note

Some Linux distros will not have /usr/local/bin in their PATH and you will need to compensate.

export PATH=$PATH:/usr/local/bin\n
"},{"location":"tutorials/setup.html#running-the-server","title":"Running the Server","text":"

The server will print and log connected clients and their messages.

usage: chatapp-server [-h] [-a ADDRESS] [-p PORT]\n\nchat app server\n\noptional arguments:\n  -h, --help            show this help message and exit\n-a ADDRESS, --address ADDRESS\n                        address to listen on (default: )\n-p PORT, --port PORT  port to listen on (default: 9001)\n
"},{"location":"tutorials/setup.html#running-the-client","title":"Running the Client","text":"

The client will print and log messages from other clients and their join/leave status.

usage: chatapp-client [-h] -a ADDRESS [-p PORT]\n\nchat app client\n\noptional arguments:\n  -h, --help            show this help message and exit\n-a ADDRESS, --address ADDRESS\n                        address to listen on (default: None)\n-p PORT, --port PORT  port to listen on (default: 9001)\n
"},{"location":"tutorials/setup.html#installing-the-chat-app-service","title":"Installing the Chat App Service","text":"
  1. You will first need to edit /etc/core/core.conf to update the config service path to pick up your service
    custom_config_services_dir = <path for service>\n
  2. Then you will need to copy/move chatapp/chatapp_service.py to the directory configured above (example commands are shown after this list)
  3. Then you will need to restart the core-daemon to pick up this new service
  4. Now the service will be an available option under the group ChatApp with the name ChatApp Server
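
As a rough sketch of steps 2 and 3, assuming a hypothetical /opt/core-services directory was configured as the custom config service path and that the daemon is run manually as in the tutorials:

# copy the service into the configured directory\nsudo mkdir -p /opt/core-services\nsudo cp chatapp/chatapp_service.py /opt/core-services/\n\n# stop any currently running core-daemon (ctrl+c), then start it again to load the service\nsudop core-daemon\n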
"},{"location":"tutorials/tutorial1.html","title":"Tutorial 1 - Wired Network","text":""},{"location":"tutorials/tutorial1.html#overview","title":"Overview","text":"

This tutorial will cover some use cases when using a wired 2 node scenario in CORE.

"},{"location":"tutorials/tutorial1.html#files","title":"Files","text":"

Below is the list of files used for this tutorial.

  • 2 node wired scenario
    • scenario.xml
    • scenario.py
  • 2 node wired scenario, with n1 running the \"Chat App Server\" service
    • scenario_service.xml
    • scenario_service.py
"},{"location":"tutorials/tutorial1.html#running-this-tutorial","title":"Running this Tutorial","text":"

This section covers interactions that can be carried out for this scenario.

Our scenario has the following nodes and addresses:

  • n1 - 10.0.0.20
  • n2 - 10.0.0.21

All usages below assume a clean scenario start.

"},{"location":"tutorials/tutorial1.html#using-ping","title":"Using Ping","text":"

Using the command line utility ping can be a good way to verify connectivity between nodes in CORE.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n1 by double clicking it in the GUI

  • Run the following in n1 terminal
    ping -c 3 10.0.0.21\n
  • You should see the following output
    PING 10.0.0.21 (10.0.0.21) 56(84) bytes of data.\n64 bytes from 10.0.0.21: icmp_seq=1 ttl=64 time=0.085 ms\n64 bytes from 10.0.0.21: icmp_seq=2 ttl=64 time=0.079 ms\n64 bytes from 10.0.0.21: icmp_seq=3 ttl=64 time=0.072 ms\n\n--- 10.0.0.21 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 1999ms\nrtt min/avg/max/mdev = 0.072/0.078/0.085/0.011 ms\n
"},{"location":"tutorials/tutorial1.html#using-tcpdump","title":"Using Tcpdump","text":"

Using tcpdump can be very beneficial for examining a network. You can verify traffic being sent/received among many other uses.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n1 by double clicking it in the GUI

  • Open a terminal on n2 by double clicking it in the GUI
  • Run the following in n2 terminal
    tcpdump -lenni eth0\n
  • Run the following in n1 terminal
    ping -c 1 10.0.0.21\n
  • You should see the following in n2 terminal
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode\nlistening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes\n10:23:04.685292 00:00:00:aa:00:00 > 00:00:00:aa:00:01, ethertype IPv4 (0x0800), length 98: 10.0.0.20 > 10.0.0.21: ICMP echo request, id 67, seq 1, length 64\n10:23:04.685329 00:00:00:aa:00:01 > 00:00:00:aa:00:00, ethertype IPv4 (0x0800), length 98: 10.0.0.21 > 10.0.0.20: ICMP echo reply, id 67, seq 1, length 64\n
"},{"location":"tutorials/tutorial1.html#editing-a-link","title":"Editing a Link","text":"

You can edit links between nodes in CORE to modify loss, delay, bandwidth, and more. This can be beneficial for understanding how software will behave in adverse conditions.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario.xml

  • You can now click on the Start Session button to run the scenario

  • Right click the link between n1 and n2

  • Select Configure

  • Update the loss to 25

  • Open a terminal on n1 by double clicking it in the GUI

  • Run the following in n1 terminal
    ping -c 10 10.0.0.21\n
  • You should see something similar for the summary output, reflecting the change in loss
    --- 10.0.0.21 ping statistics ---\n10 packets transmitted, 6 received, 40% packet loss, time 9000ms\nrtt min/avg/max/mdev = 0.077/0.093/0.108/0.016 ms\n
  • Remember that the effective loss is compounded, since the configured loss applies to both directions of a ping (the request and the reply)
"},{"location":"tutorials/tutorial1.html#running-software","title":"Running Software","text":"

We will now leverage the installed Chat App software to stand up a server and client within the nodes of our scenario.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n1 by double clicking it in the GUI

  • Run the following in n1 terminal
    export PATH=$PATH:/usr/local/bin\nchatapp-server\n
  • Open a terminal on n2 by double clicking it in the GUI
  • Run the following in n2 terminal
    export PATH=$PATH:/usr/local/bin\nchatapp-client -a 10.0.0.20\n
  • You will see the following output in n1 terminal
    chat server listening on: :9001\n[server] 10.0.0.21:44362 joining\n
  • Type the following in n2 terminal and hit enter
    hello world\n
  • You will see the following output in n1 terminal
    chat server listening on: :9001\n[server] 10.0.0.21:44362 joining\n[10.0.0.21:44362] hello world\n
"},{"location":"tutorials/tutorial1.html#tailing-a-log","title":"Tailing a Log","text":"

In this case we are using the service based scenario. This will automatically start and run the Chat App Server on n1 and log to a file. This case will demonstrate using tail -f to observe the output of running software.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario_service.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n1 by double clicking it in the GUI

  • Run the following in n1 terminal
    tail -f chatapp.log\n
  • Open a terminal on n2 by double clicking it in the GUI
  • Run the following in n2 terminal
    export PATH=$PATH:/usr/local/bin\nchatapp-client -a 10.0.0.20\n
  • You will see the following output in n1 terminal
    chat server listening on: :9001\n[server] 10.0.0.21:44362 joining\n
  • Type the following in n2 terminal and hit enter
    hello world\n
  • You will see the following output in n1 terminal
    chat server listening on: :9001\n[server] 10.0.0.21:44362 joining\n[10.0.0.21:44362] hello world\n
"},{"location":"tutorials/tutorial1.html#grpc-python-scripts","title":"gRPC Python Scripts","text":"

You can also run the same steps above using the provided gRPC script versions of the scenarios. Below are the steps to run and join one of these scenarios; you can then continue with the remaining steps of a given section.

  1. Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  2. From another terminal run the tutorial python script, which will create a session to join
    /opt/core/venv/bin/python scenario.py\n
  3. In another terminal run the CORE GUI
    core-gui\n
  4. You will be presented with sessions to join, select the one created by the script

"},{"location":"tutorials/tutorial2.html","title":"Tutorial 2 - Wireless Network","text":""},{"location":"tutorials/tutorial2.html#overview","title":"Overview","text":"

This tutorial will cover the use of a 3 node wireless scenario in CORE, running a chat server on one node and a chat client on another. The client will send a simple message and the server will log receipt of the message.

"},{"location":"tutorials/tutorial2.html#files","title":"Files","text":"

Below is the list of files used for this tutorial.

  • scenario.xml - 3 node CORE xml scenario file (wireless)
  • scenario.py - 3 node CORE gRPC python script (wireless)
"},{"location":"tutorials/tutorial2.html#running-with-the-xml-scenario-file","title":"Running with the XML Scenario File","text":"

This section will cover running this sample tutorial using the XML scenario file, leveraging an NS2 mobility file.

  • Make sure the core-daemon is running in a terminal
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open...
  • Navigate to and select this tutorial's scenario.xml file
  • You can now click play to start the session

  • Note that the OSPF routing protocol is included in the scenario to provide routes to other nodes, as they are discovered

  • Double click node n4 to open a terminal and ping node n2
    ping  -c 2 10.0.0.2\nPING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.\n64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=20.2 ms\n64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=20.2 ms\n\n--- 10.0.0.2 ping statistics ---\n2 packets transmitted, 2 received, 0% packet loss, time 1000ms\nrtt min/avg/max/mdev = 20.168/20.173/20.178/0.005 ms\n
"},{"location":"tutorials/tutorial2.html#configuring-delay","title":"Configuring Delay","text":"
  • Right click on the wlan1 node and select WLAN Config, then set delay to 500000

  • Using the open terminal for node n4, ping n2 again, expect about 2 seconds delay

    ping -c 5 10.0.0.2\n64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=2001 ms\n64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=2000 ms\n64 bytes from 10.0.0.2: icmp_seq=3 ttl=63 time=2000 ms\n64 bytes from 10.0.0.2: icmp_seq=4 ttl=63 time=2000 ms\n64 bytes from 10.0.0.2: icmp_seq=5 ttl=63 time=2000 ms\n\n--- 10.0.0.2 ping statistics ---\n5 packets transmitted, 5 received, 0% packet loss, time 4024ms\nrtt min/avg/max/mdev = 2000.176/2000.438/2001.166/0.376 ms, pipe 2\n

"},{"location":"tutorials/tutorial2.html#configure-loss","title":"Configure Loss","text":"
  • Right click on the wlan1 node and select WLAN Config, set delay back to 5000 and loss to 10

  • Using the open terminal for node n4, ping n2 again, expect to notice considerable loss

    ping  -c 10 10.0.0.2\nPING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.\n64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=20.4 ms\n64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=20.5 ms\n64 bytes from 10.0.0.2: icmp_seq=3 ttl=63 time=20.2 ms\n64 bytes from 10.0.0.2: icmp_seq=4 ttl=63 time=20.8 ms\n64 bytes from 10.0.0.2: icmp_seq=5 ttl=63 time=21.9 ms\n64 bytes from 10.0.0.2: icmp_seq=8 ttl=63 time=22.7 ms\n64 bytes from 10.0.0.2: icmp_seq=9 ttl=63 time=22.4 ms\n64 bytes from 10.0.0.2: icmp_seq=10 ttl=63 time=20.3 ms\n\n--- 10.0.0.2 ping statistics ---\n10 packets transmitted, 8 received, 20% packet loss, time 9064ms\nrtt min/avg/max/mdev = 20.188/21.143/22.717/0.967 ms\n

  • Make sure to set loss back to 0 when done
"},{"location":"tutorials/tutorial2.html#running-with-the-grpc-python-script","title":"Running with the gRPC Python Script","text":"

This section will cover running this sample tutorial using the gRPC python script and providing mobility over the gRPC interface.

  • Make sure the core-daemon is running in a terminal
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • From another terminal run the scenario.py script
    /opt/core/venv/bin/python scenario.py\n
  • In the GUI dialog box select the session and click connect
  • You will now have joined the already running scenario

"},{"location":"tutorials/tutorial2.html#running-software","title":"Running Software","text":"

We will now leverage the installed Chat App software to stand up a server and client within the nodes of our scenario. You can use either scenario.xml or the scenario.py gRPC script as the basis for the running scenario.

  • In the GUI double click on node n4, this will bring up a terminal for this node
  • In the n4 terminal, run the server
    export PATH=$PATH:/usr/local/bin\nchatapp-server\n
  • In the GUI double click on node n2, this will bring up a terminal for this node
  • In the n2 terminal, run the client
    export PATH=$PATH:/usr/local/bin\nchatapp-client -a 10.0.0.4\n
  • This will result in n2 connecting to the server
  • In the n2 terminal, type a message at the client prompt
    >>hello world\n
  • Observe that text typed at client then appears in the terminal of n4
    chat server listening on: :9001\n[server] 10.0.0.2:53684 joining\n[10.0.0.2:53684] hello world\n
"},{"location":"tutorials/tutorial3.html","title":"Tutorial 3 - Basic Mobility","text":""},{"location":"tutorials/tutorial3.html#overview","title":"Overview","text":"

This tutorial will cover using a 3 node scenario in CORE with basic mobility. Mobility can be provided from a NS2 file or by including mobility commands in a gRPC script.

"},{"location":"tutorials/tutorial3.html#files","title":"Files","text":"

Below is the list of files used for this tutorial.

  • movements1.txt - a NS2 mobility input file
  • scenario.xml - 3 node CORE xml scenario file (wireless)
  • scenario.py - 3 node CORE gRPC python script (wireless)
  • printout.py - event listener
"},{"location":"tutorials/tutorial3.html#running-with-xml-file-using-ns2-movement","title":"Running with XML file using NS2 Movement","text":"

This section will cover running this sample tutorial using the XML scenario file, leveraging an NS2 file for mobility.

  • Make sure the core-daemon is running in a terminal
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • Observe the format of the NS2 file, cat movements1.txt. Note that this file was manually developed.
    $node_(1) set X_ 208.1\n$node_(1) set Y_ 211.05\n$node_(1) set Z_ 0\n$ns_ at 0.0 \"$node_(1) setdest 208.1 211.05 0.00\"\n$node_(2) set X_ 393.1\n$node_(2) set Y_ 223.05\n$node_(2) set Z_ 0\n$ns_ at 0.0 \"$node_(2) setdest 393.1 223.05 0.00\"\n$node_(4) set X_ 499.1\n$node_(4) set Y_ 186.05\n$node_(4) set Z_ 0\n$ns_ at 0.0 \"$node_(4) setdest 499.1 186.05 0.00\"\n$ns_ at 1.0 \"$node_(1) setdest 190.1 225.05 0.00\"\n$ns_ at 1.0 \"$node_(2) setdest 393.1 225.05 0.00\"\n$ns_ at 1.0 \"$node_(4) setdest 515.1 186.05 0.00\"\n$ns_ at 2.0 \"$node_(1) setdest 175.1 250.05 0.00\"\n$ns_ at 2.0 \"$node_(2) setdest 393.1 250.05 0.00\"\n$ns_ at 2.0 \"$node_(4) setdest 530.1 186.05 0.00\"\n$ns_ at 3.0 \"$node_(1) setdest 160.1 275.05 0.00\"\n$ns_ at 3.0 \"$node_(2) setdest 393.1 275.05 0.00\"\n$ns_ at 3.0 \"$node_(4) setdest 530.1 186.05 0.00\"\n$ns_ at 4.0 \"$node_(1) setdest 160.1 300.05 0.00\"\n$ns_ at 4.0 \"$node_(2) setdest 393.1 300.05 0.00\"\n$ns_ at 4.0 \"$node_(4) setdest 550.1 186.05 0.00\"\n$ns_ at 5.0 \"$node_(1) setdest 160.1 275.05 0.00\"\n$ns_ at 5.0 \"$node_(2) setdest 393.1 275.05 0.00\"\n$ns_ at 5.0 \"$node_(4) setdest 530.1 186.05 0.00\"\n$ns_ at 6.0 \"$node_(1) setdest 175.1 250.05 0.00\"\n$ns_ at 6.0 \"$node_(2) setdest 393.1 250.05 0.00\"\n$ns_ at 6.0 \"$node_(4) setdest 515.1 186.05 0.00\"\n$ns_ at 7.0 \"$node_(1) setdest 190.1 225.05 0.00\"\n$ns_ at 7.0 \"$node_(2) setdest 393.1 225.05 0.00\"\n$ns_ at 7.0 \"$node_(4) setdest 499.1 186.05 0.00\"\n
  • In the GUI menu bar select File->Open..., and select this tutorial's scenario.xml file
  • You can now click play to start the session
  • Select the play button on the Mobility Player to start mobility
  • Observe movement of the nodes
  • Note that the OSPF routing protocol is included in the scenario to build the routing tables, so that routes to other nodes are known; once routes are discovered, ping will work

"},{"location":"tutorials/tutorial3.html#running-with-the-grpc-script","title":"Running with the gRPC Script","text":"

This section covers using a gRPC script to create and provide scenario movement.

  • Make sure the core-daemon is running in a terminal
    sudop core-daemon\n
  • From another terminal run the scenario.py script
    /opt/core/venv/bin/python scenario.py\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI dialog box select the session and click connect
  • You will now have joined the already running scenario
  • In the terminal running the scenario.py, hit a key to start motion

  • Observe the link between n3 and n4 is shown and then as motion continues the link breaks

"},{"location":"tutorials/tutorial3.html#running-the-chat-app-software","title":"Running the Chat App Software","text":"

This section covers using one of the above 2 scenarios to run software within the nodes.

  • In the GUI double click on n4, this will bring up a terminal for this node
  • In the n4 terminal, run the server
    export PATH=$PATH:/usr/local/bin\nchatapp-server\n
  • In the GUI double click on n2, this will bring up a terminal for this node
  • In the n2 terminal, run the client
    export PATH=$PATH:/usr/local/bin\nchatapp-client -a 10.0.0.4\n
  • This will result in n2 connecting to the server
  • In the n2 terminal, type a message at the client prompt and hit enter
    >>hello world\n
  • Observe that text typed at client then appears in the server terminal
    chat server listening on: :9001\n[server] 10.0.0.2:53684 joining\n[10.0.0.2:53684] hello world\n
"},{"location":"tutorials/tutorial3.html#running-mobility-from-a-node","title":"Running Mobility from a Node","text":"

This section provides an example of running a script within a node that leverages a control network in CORE to issue mobility commands using the gRPC API.

  • Edit the following line in /etc/core/core.conf
    grpcaddress = 0.0.0.0\n
  • Start the scenario from the scenario.xml
  • From the GUI open Session -> Options and set Control Network to 172.16.0.0/24
  • Click to play the scenario
  • Double click on n2 to get a terminal window
  • From the terminal window for n2, run the script
    /opt/core/venv/bin/python move-node2.py\n
  • Observe that node 2 moves and continues to move

"},{"location":"tutorials/tutorial4.html","title":"Tutorial 4 - Tests","text":""},{"location":"tutorials/tutorial4.html#overview","title":"Overview","text":"

A use case for CORE is to help automate integration tests for software running within a network. This tutorial covers using CORE with the python pytest testing framework. It shows how you can define tests for different use cases to validate software and outcomes within a defined network. Using pytest, you create tests with all the standard pytest functionality: creating a test file and then defining the test functions to run. For these tests, we are leveraging the CORE library directly and the API it provides.

Refer to the pytest documentation for in-depth information on how to write tests with pytest.

"},{"location":"tutorials/tutorial4.html#files","title":"Files","text":"

A directory is used to contain your tests. Within this directory we need a conftest.py, which pytest will pick up to help define and provide test fixtures that will be leveraged within our tests.

  • tests
    • conftest.py - file used by pytest to define fixtures, which can be shared across tests
    • test_ping.py - defines test classes/functions to run
"},{"location":"tutorials/tutorial4.html#test-fixtures","title":"Test Fixtures","text":"

Below are the definitions for fixtures you can define to facilitate creating CORE based tests.

The global session fixture creates one CoreEmu object for the entire test session, yields it for testing, and calls shutdown when everything is over.

@pytest.fixture(scope=\"session\")\ndef global_session():\n    core = CoreEmu()\n    session = core.create_session()\n    session.set_state(EventTypes.CONFIGURATION_STATE)\n    yield session\n    core.shutdown()\n

The regular session fixture leverages the global session fixture. It will set the correct state for each test case, yield the session for a test, and then clear the session after a test finishes to prepare for the next test.

@pytest.fixture\ndef session(global_session):\n    global_session.set_state(EventTypes.CONFIGURATION_STATE)\n    yield global_session\n    global_session.clear()\n

The ip prefixes fixture provides a preconfigured convenience for creating and assigning interfaces to nodes when building your network within a test. The address subnet can be whatever you desire.

@pytest.fixture(scope=\"session\")\ndef ip_prefixes():\n    return IpPrefixes(ip4_prefix=\"10.0.0.0/24\")\n
"},{"location":"tutorials/tutorial4.html#test-functions","title":"Test Functions","text":"

Within a pytest test file, you have the freedom to create any kind of test you like, but they will all follow a similar formula.

  • define a test function that will leverage the session and ip prefixes fixtures
  • then create a network to test, using the session fixture
  • run commands within nodes as desired, to test out your use case
  • validate command result or output for expected behavior to pass or fail

In the test below, we create a simple 2 node wired network and validate node1 can ping node2 successfully.

def test_success(self, session: Session, ip_prefixes: IpPrefixes):\n    # create nodes\n    node1 = session.add_node(CoreNode)\n    node2 = session.add_node(CoreNode)\n\n    # link nodes together\n    iface1_data = ip_prefixes.create_iface(node1)\n    iface2_data = ip_prefixes.create_iface(node2)\n    session.add_link(node1.id, node2.id, iface1_data, iface2_data)\n\n    # ping node, expect a successful command\n    node1.cmd(f\"ping -c 1 {iface2_data.ip4}\")\n
"},{"location":"tutorials/tutorial4.html#install-pytest","title":"Install Pytest","text":"

Since we are running an automated test within CORE, we will need to install pytest within the python interpreter used by CORE.

sudo /opt/core/venv/bin/python -m pip install pytest\n
"},{"location":"tutorials/tutorial4.html#running-tests","title":"Running Tests","text":"

You can run your own or the provided tests, by running the following.

cd <test directory>\nsudo /opt/core/venv/bin/python -m pytest -v\n

If you run the provided tests, you would expect to see the two tests running and passing.

tests/test_ping.py::TestPing::test_success PASSED                                [ 50%]\ntests/test_ping.py::TestPing::test_failure PASSED                                [100%]\n
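
If you want to run a single test instead of the whole suite, the standard pytest node id selection also works here, for example:

cd <test directory>\nsudo /opt/core/venv/bin/python -m pytest -v tests/test_ping.py::TestPing::test_success\n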
"},{"location":"tutorials/tutorial5.html","title":"Tutorial 5 - RJ45 Node","text":""},{"location":"tutorials/tutorial5.html#overview","title":"Overview","text":"

This tutorial will cover connecting a CORE VM to a Windows host machine using an RJ45 node.

"},{"location":"tutorials/tutorial5.html#files","title":"Files","text":"

Below is the list of files used for this tutorial.

  • scenario.xml - the scenario with RJ45 unassigned
  • scenario.py - gRPC script to create the RJ45 in a simple CORE scenario
  • client_for_windows.py - chat app client modified for windows
"},{"location":"tutorials/tutorial5.html#running-with-the-saved-xml-file","title":"Running with the Saved XML File","text":"

This section covers using the saved scenario.xml file to get up and running.

  • Configure the Windows host VM to have a bridged network adapter

  • Make sure the core-daemon is running in a terminal

    sudop core-daemon\n

  • In another terminal run the GUI
    core-gui\n
  • Open the scenario.xml with the unassigned RJ45 node

  • Configure the RJ45 node name to use the bridged interface

  • After configuring the RJ45, run the scenario:

  • Double click node n1 to open a terminal and add a route to the Windows host

    ip route add 192.168.0.0/24 via 10.0.0.20\n

  • On the Windows host, using a Windows command prompt with administrator privileges, add a route that uses the Windows interface connected to the interface assigned to the RJ45 node
    # if enp0s3 is assigned 192.168.0.6/24\nroute add 10.0.0.0 mask 255.255.255.0 192.168.0.6\n
  • Now you should be able to ping from the Windows host to n1
    C:\\WINDOWS\\system32>ping 10.0.0.20\n\nPinging 10.0.0.20 with 32 bytes of data:\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\n\nPing statistics for 10.0.0.20:\n    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)\nApproximate round trip times in milli-seconds:\n    Minimum = 0ms, Maximum = 0ms, Average = 0ms\n
  • After pinging successfully, run the following in the n1 terminal to start the chatapp server
    export PATH=$PATH:/usr/local/bin\nchatapp-server\n
  • On the Windows host, run the client_for_windows.py
    python3 client_for_windows.py -a 10.0.0.20\nconnected to server(10.0.0.20:9001) as client(192.168.0.6:49960)\n>> .Hello WORLD\n.Hello WORLD Again\n.\n
  • Observe output on n1
    chat server listening on: :9001\n[server] 192.168.0.6:49960 joining\n[192.168.0.6:49960] Hello WORLD\n[192.168.0.6:49960] Hello WORLD Again\n
  • When finished, you can stop the CORE scenario and cleanup
  • On the Windows host remove the added route
    route delete 10.0.0.0\n
"},{"location":"tutorials/tutorial5.html#running-with-the-grpc-script","title":"Running with the gRPC Script","text":"

This section covers leveraging the gRPC script to get up and running.

  • Configure the Windows host VM to have a bridged network adapter

  • Make sure the core-daemon is running in a terminal

    sudop core-daemon\n

  • In another terminal run the GUI
    core-gui\n
  • Run the gRPC script in the VM
    # use the desired interface name, in this case enp0s3\n/opt/core/venv/bin/python scenario.py enp0s3\n
  • In the core-gui connect to the running session that was created

  • Double click node n1 to open a terminal and add a route to the Windows host

    ip route add 192.168.0.0/24 via 10.0.0.20\n

  • On the Windows host, using a Windows command prompt with administrator privileges, add a route that uses the Windows interface connected to the interface assigned to the RJ45 node
    # if enp0s3 is assigned 192.168.0.6/24\nroute add 10.0.0.0 mask 255.255.255.0 192.168.0.6\n
  • Now you should be able to ping from the Windows host to n1
    C:\\WINDOWS\\system32>ping 10.0.0.20\n\nPinging 10.0.0.20 with 32 bytes of data:\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\nReply from 10.0.0.20: bytes=32 time<1ms TTL=64\n\nPing statistics for 10.0.0.20:\n    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)\nApproximate round trip times in milli-seconds:\n    Minimum = 0ms, Maximum = 0ms, Average = 0ms\n
  • After pinging successfully, run the following in the n1 terminal to start the chatapp server
    export PATH=$PATH:/usr/local/bin\nchatapp-server\n
  • On the Windows host, run the client_for_windows.py
    python3 client_for_windows.py -a 10.0.0.20\nconnected to server(10.0.0.20:9001) as client(192.168.0.6:49960)\n>> .Hello WORLD\n.Hello WORLD Again\n.\n
  • Observe output on n1
    chat server listening on: :9001\n[server] 192.168.0.6:49960 joining\n[192.168.0.6:49960] Hello WORLD\n[192.168.0.6:49960] Hello WORLD Again\n
  • When finished, you can stop the CORE scenario and cleanup
  • On the Windows host remove the added route
    route delete 10.0.0.0\n
"},{"location":"tutorials/tutorial6.html","title":"Tutorial 6 - Improved Visuals","text":""},{"location":"tutorials/tutorial6.html#overview","title":"Overview","text":"

This tutorial will cover changing the node icons, changing the background, and changing or hiding links.

"},{"location":"tutorials/tutorial6.html#files","title":"Files","text":"

Below is the list of files used for this tutorial.

  • drone.png - icon for a drone
  • demo.py - a mobility script for a node
  • terrain.png - a background
  • completed-scenario.xml - the scenario after making all changes below
"},{"location":"tutorials/tutorial6.html#running-this-tutorial","title":"Running this Tutorial","text":"

This section covers running this sample tutorial, which develops a scenario file.

  • Ensure that /etc/core/core.conf has grpcaddress set to 0.0.0.0
  • Make sure the core-daemon is running in a terminal
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
"},{"location":"tutorials/tutorial6.html#changing-node-icons","title":"Changing Node Icons","text":"
  • Create three MDR nodes

  • Double click on each node for configuration, click the icon and set it to use the drone.png image

  • Use Session -> Options and set Control Network 0 to 172.16.0.0/24

"},{"location":"tutorials/tutorial6.html#linking-nodes-to-wlan","title":"Linking Nodes to WLAN","text":"
  • Add a WLAN Node
  • Link the three prior MDR nodes to the WLAN node

  • Click play to start the scenario

  • Observe wireless links being created

  • Click stop to end the scenario

  • Right click the WLAN node and select Edit -> Hide
  • Now you can view the nodes in isolation

"},{"location":"tutorials/tutorial6.html#changing-canvas-background","title":"Changing Canvas Background","text":"
  • Click Canvas -> Wallpaper to set the background to terrain.png

  • Click play to start the scenario again

  • You now have a scenario with drone icons, terrain background, links displayed and hidden WLAN node

"},{"location":"tutorials/tutorial6.html#adding-mobility","title":"Adding Mobility","text":"
  • Open and play the completed-scenario.xml
  • Double click on n1 and run the demo.py script
    # node id is first parameter, second is total nodes\n/opt/core/venv/bin/python demo.py 1 3\n
  • Let it run to see the link break as the node 1 drone approaches the right side

  • Repeat for other nodes, double click on n2 and n3 and run the demo.py script

    # n2\n/opt/core/venv/bin/python demo.py 2 3\n# n3\n/opt/core/venv/bin/python demo.py 3 3\n

  • You can turn off wireless links via View -> Wireless Links
  • Observe nodes moving in parallel tracks. When the far right is reached, the node will move down and then move to the left. When the far left is reached, the drone will move down and then move to the right.

"},{"location":"tutorials/tutorial7.html","title":"Tutorial 7 - EMANE","text":""},{"location":"tutorials/tutorial7.html#overview","title":"Overview","text":"

This tutorial will cover basic usage and some concepts one may want to use or leverage when working with and creating EMANE based networks.

For more detailed information on EMANE see the following:

  • EMANE in CORE
  • EMANE Wiki
"},{"location":"tutorials/tutorial7.html#files","title":"Files","text":"

Below is a list of the files used for this tutorial.

  • 2 node EMANE ieee80211abg scenario
    • scenario.xml
    • scenario.py
  • 2 node EMANE ieee80211abg scenario, with n2 running the \"Chat App Server\" service
    • scenario_service.xml
    • scenario_service.py
"},{"location":"tutorials/tutorial7.html#running-this-tutorial","title":"Running this Tutorial","text":"

This section covers interactions that can be carried out for this scenario.

Our scenario has the following nodes and addresses:

  • emane1 - no address, this is a representative node for the EMANE network
  • n2 - 10.0.0.1
  • n3 - 10.0.0.2

All usages below assume a clean scenario start.

"},{"location":"tutorials/tutorial7.html#using-ping","title":"Using Ping","text":"

Using the command line utility ping can be a good way to verify connectivity between nodes in CORE.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n2 by double clicking it in the GUI

  • Run the following in n2 terminal
    ping -c 3 10.0.0.2\n
  • You should see the following output
    PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.\n64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=7.93 ms\n64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=3.07 ms\n64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=3.05 ms\n\n--- 10.0.0.2 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2000ms\nrtt min/avg/max/mdev = 3.049/4.685/7.932/2.295 ms\n
"},{"location":"tutorials/tutorial7.html#using-tcpdump","title":"Using Tcpdump","text":"

Using tcpdump can be very beneficial for examining a network. You can verify traffic being sent/received among many other uses.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n2 by double clicking it in the GUI

  • Open a terminal on n3 by double clicking it in the GUI
  • Run the following in n3 terminal
    tcpdump -lenni eth0\n
  • Run the following in n2 terminal
    ping -c 1 10.0.0.2\n
  • You should see the following in n3 terminal
    tcpdump: verbose output suppressed, use -v[v]... for full protocol decode\nlistening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes\n14:56:25.414283 02:02:00:00:00:01 > 02:02:00:00:00:02, ethertype IPv4 (0x0800), length 98: 10.0.0.1 > 10.0.0.2: ICMP echo request, id 64832, seq 1, length 64\n14:56:25.414303 02:02:00:00:00:02 > 02:02:00:00:00:01, ethertype IPv4 (0x0800), length 98: 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 64832, seq 1, length 64\n
"},{"location":"tutorials/tutorial7.html#running-software","title":"Running Software","text":"

We will now leverage the installed Chat App software to stand up a server and client within the nodes of our scenario.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n2 by double clicking it in the GUI

  • Run the following in n2 terminal
    export PATH=$PATH:/usr/local/bin\nchatapp-server\n
  • Open a terminal on n3 by double clicking it in the GUI
  • Run the following in n3 terminal
    export PATH=$PATH:/usr/local/bin\nchatapp-client -a 10.0.0.1\n
  • You will see the following output in n2 terminal
    chat server listening on: :9001\n[server] 10.0.0.2:44362 joining\n
  • Type the following in n3 terminal and hit enter
    hello world\n
  • You will see the following output in n2 terminal
    chat server listening on: :9001\n[server] 10.0.0.2:44362 joining\n[10.0.0.2:44362] hello world\n
"},{"location":"tutorials/tutorial7.html#tailing-a-log","title":"Tailing a Log","text":"

In this case we are using the service based scenario. This will automatically start and run the Chat App Server on n2 and log to a file. This case will demonstrate using tail -f to observe the output of running software.

  • Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  • In another terminal run the GUI
    core-gui\n
  • In the GUI menu bar select File->Open..., then navigate to and select scenario_service.xml

  • You can now click on the Start Session button to run the scenario

  • Open a terminal on n2 by double clicking it in the GUI

  • Run the following in n2 terminal
    tail -f chatapp.log\n
  • Open a terminal on n3 by double clicking it in the GUI
  • Run the following in n3 terminal
    export PATH=$PATH:/usr/local/bin\nchatapp-client -a 10.0.0.1\n
  • You will see the following output in n2 terminal
    chat server listening on: :9001\n[server] 10.0.0.2:44362 joining\n
  • Type the following in n3 terminal and hit enter
    hello world\n
  • You will see the following output in n2 terminal
    chat server listening on: :9001\n[server] 10.0.0.2:44362 joining\n[10.0.0.2:44362] hello world\n
"},{"location":"tutorials/tutorial7.html#advanced-topics","title":"Advanced Topics","text":"

This section will cover some high level topics and examples for running and using EMANE in CORE. You can find more detailed tutorials and examples at the EMANE Tutorial.

Note

Every topic below assumes CORE, EMANE, and OSPF MDR have been installed.

Scenario files to support the EMANE topics below will be found in the GUI default directory for opening XML files.

Topic | Model | Description
XML Files | RF Pipe | Overview of generated XML files used to drive EMANE
GPSD | RF Pipe | Overview of running and integrating gpsd with EMANE
Precomputed | RF Pipe | Overview of using the precomputed propagation model
EEL | RF Pipe | Overview of using the Emulation Event Log (EEL) Generator
Antenna Profiles | RF Pipe | Overview of using antenna profiles in EMANE
"},{"location":"tutorials/tutorial7.html#grpc-python-scripts","title":"gRPC Python Scripts","text":"

You can also run the same steps above using the provided gRPC script versions of the scenarios. Below are the steps to run and join one of these scenarios; you can then continue with the remaining steps of a given section.

  1. Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  2. From another terminal run the tutorial python script, which will create a session to join
    /opt/core/venv/bin/python scenario.py\n
  3. In another terminal run the CORE GUI
    core-gui\n
  4. You will be presented with sessions to join, select the one created by the script

"},{"location":"tutorials/common/grpc.html","title":"Grpc","text":""},{"location":"tutorials/common/grpc.html#grpc-python-scripts","title":"gRPC Python Scripts","text":"

You can also run the same steps above using the provided gRPC script versions of the scenarios. Below are the steps to run and join one of these scenarios; you can then continue with the remaining steps of a given section.

  1. Make sure the CORE daemon is running in a terminal, if not already
    sudop core-daemon\n
  2. From another terminal run the tutorial python script, which will create a session to join
    /opt/core/venv/bin/python scenario.py\n
  3. In another terminal run the CORE GUI
    core-gui\n
  4. You will be presented with sessions to join, select the one created by the script

"}]} \ No newline at end of file diff --git a/services.html b/services.html new file mode 100644 index 00000000..92eb5687 --- /dev/null +++ b/services.html @@ -0,0 +1,1780 @@ + + + + + + + + + + + + + + + + + + + + + + Services (Deprecated) - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Services (Deprecated)

+

Overview

+

CORE uses the concept of services to specify what processes or scripts run on a +node when it is started. Layer-3 nodes such as routers and PCs are defined by +the services that they run.

+

Services may be customized for each node, or new custom services can be +created. New node types can be created each having a different name, icon, and +set of default services. Each service defines the per-node directories, +configuration files, startup index, starting commands, validation commands, +shutdown commands, and meta-data associated with a node.

+
+

Note

+

Network namespace nodes do not undergo the normal Linux boot process +using the init, upstart, or systemd frameworks. These +lightweight nodes use configured CORE services.

+
+

Available Services

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Service Group | Services
BIRD | BGP, OSPF, RADV, RIP, Static
EMANE | Transport Service
FRR | BABEL, BGP, OSPFv2, OSPFv3, PIMD, RIP, RIPNG, Zebra
NRL | arouted, MGEN Sink, MGEN Actor, NHDP, OLSR, OLSRORG, OLSRv2, SMF
Quagga | BABEL, BGP, OSPFv2, OSPFv3, OSPFv3 MDR, RIP, RIPNG, XPIMD, Zebra
SDN | OVS, RYU
Security | Firewall, IPsec, NAT, VPN Client, VPN Server
Utility | ATD, Routing Utils, DHCP, FTP, IP Forward, PCAP, RADVD, SSH, UCARP
XORP | BGP, OLSR, OSPFv2, OSPFv3, PIMSM4, PIMSM6, RIP, RIPNG, Router Manager
+

Node Types and Default Services

+

Here are the default node types and their services:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Node Type | Services
router | zebra, OSPFv2, OSPFv3, and IPForward services for IGP link-state routing.
host | DefaultRoute and SSH services, representing an SSH server having a default route when connected directly to a router.
PC | DefaultRoute service for having a default route when connected directly to a router.
mdr | zebra, OSPFv3MDR, and IPForward services for wireless-optimized MANET Designated Router routing.
prouter | a physical router, having the same default services as the router node type; for incorporating Linux testbed machines into an emulation.
+

Configuration files can be automatically generated by each service. For +example, CORE automatically generates routing protocol configuration for the +router nodes in order to simplify the creation of virtual networks.

+

To change the services associated with a node, double-click on the node to invoke its configuration dialog and click on the Services... button, or right-click a node and choose Services... from the menu. Services are enabled or disabled by clicking on their names. The button next to each service name allows you to customize all aspects of this service for this node. For example, special route redistribution commands could be inserted into the Quagga routing configuration associated with the zebra service.

+

To change the default services associated with a node type, use the Node Types +dialog available from the Edit button at the end of the Layer-3 nodes +toolbar, or choose Node types... from the Session menu. Note that +any new services selected are not applied to existing nodes if the nodes have +been customized.

+

Customizing a Service

+

A service can be fully customized for a particular node. From the node's +configuration dialog, click on the button next to the service name to invoke +the service customization dialog for that service. +The dialog has three tabs for configuring the different aspects of the service: +files, directories, and startup/shutdown.

+
+

Note

+

A yellow customize icon next to a service indicates that service +requires customization (e.g. the Firewall service). +A green customize icon indicates that a custom configuration exists. +Click the Defaults button when customizing a service to remove any +customizations.

+
+

The Files tab is used to display or edit the configuration files or scripts that +are used for this service. Files can be selected from a drop-down list, and +their contents are displayed in a text entry below. The file contents are +generated by the CORE daemon based on the network topology that exists at +the time the customization dialog is invoked.

+

The Directories tab shows the per-node directories for this service. For the +default types, CORE nodes share the same filesystem tree, except for these +per-node directories that are defined by the services. For example, the +/var/run/quagga directory needs to be unique for each node running +the Zebra service, because Quagga running on each node needs to write separate +PID files to that directory.

+
+

Note

+

The /var/log and /var/run directories are mounted uniquely per-node by default. Per-node mount targets can be found in /tmp/pycore.<session id>/<node name>.conf/

+
+

The Startup/shutdown tab lists commands that are used to start and stop this +service. The startup index allows configuring when this service starts relative +to the other services enabled for this node; a service with a lower startup +index value is started before those with higher values. Because shell scripts +generated by the Files tab will not have execute permissions set, the startup +commands should include the shell name, with +something like sh script.sh.

+

Shutdown commands optionally terminate the process(es) associated with this +service. Generally they send a kill signal to the running process using the +kill or killall commands. If the service does not terminate +the running processes using a shutdown command, the processes will be killed +when the vnoded daemon is terminated (with kill -9) and +the namespace destroyed. It is a good practice to +specify shutdown commands, which will allow for proper process termination, and +for run-time control of stopping and restarting services.

+

Validate commands are executed following the startup commands. A validate +command can execute a process or script that should return zero if the service +has started successfully, and have a non-zero return value for services that +have had a problem starting. For example, the pidof command will check +if a process is running and return zero when found. When a validate command +produces a non-zero return value, an exception is generated, which will cause +an error to be displayed in the Check Emulation Light.

+
+

Note

+

To start, stop, and restart services during run-time, right-click a +node and use the Services... menu.

+
+

New Services

+

Services can save time required to configure nodes, especially if a number +of nodes require similar configuration procedures. New services can be +introduced to automate tasks.

+

Leveraging UserDefined

+

The easiest way to capture the configuration of a new process into a service +is by using the UserDefined service. This is a blank service where any +aspect may be customized. The UserDefined service is convenient for testing +ideas for a service before adding a new service type.

+

Creating New Services

+
+

Note

+

The directory name used in custom_services_dir below should be unique and +should not correspond to any existing Python module name. For example, don't +use the name subprocess or services.

+
+
    +
  1. +

    Modify the example service shown below + to do what you want. It could generate config/script files, mount per-node + directories, start processes/scripts, etc. sample.py is a Python file that + defines one or more classes to be imported. You can create multiple Python + files that will be imported.

    +
  2. +
  3. +

    Put these files in a directory such as /home/<user>/.coregui/custom_services. Note that the last component of this directory name, custom_services, should not be named the same as any Python module, due to naming conflicts.

    +
  4. +
  5. +

    Add a custom_services_dir = /home/<user>/.coregui/custom_services entry to the + /etc/core/core.conf file.

    +
  6. +
  7. +

    Restart the CORE daemon (core-daemon). Any import errors (Python syntax) + should be displayed in the daemon output.

    +
  8. +
  9. +

    Start using your custom service on your nodes. You can create a new node + type that uses your service, or change the default services for an existing + node type, or change individual nodes.

    +
  10. +
+

If you have created a new service type that may be useful to others, please +consider contributing it to the CORE project.

+

Example Custom Service

+

Below is the skeleton for a custom service with some documentation. Most people would likely only set up the required class variables (name/group), then define the configs (the files they want to generate) and implement the generate_config function to dynamically create them. Finally, the startup commands would be supplied, which typically amounts to running the generated shell files.

+
"""
+Simple example custom service, used to drive shell commands on a node.
+"""
+from typing import Tuple
+
+from core.nodes.base import CoreNode
+from core.services.coreservices import CoreService, ServiceMode
+
+
+class ExampleService(CoreService):
+    """
+    Example Custom CORE Service
+
+    :cvar name: name used as a unique ID for this service and is required, no spaces
+    :cvar group: allows you to group services within the GUI under a common name
+    :cvar executables: executables this service depends on to function, if executable is
+        not on the path, service will not be loaded
+    :cvar dependencies: services that this service depends on for startup, tuple of
+        service names
+    :cvar dirs: directories that this service will create within a node
+    :cvar configs: files that this service will generate, without a full path this file
+        goes in the node's directory e.g. /tmp/pycore.12345/n1.conf/myfile
+    :cvar startup: commands used to start this service, any non-zero exit code will
+        cause a failure
+    :cvar validate: commands used to validate that a service was started, any non-zero
+        exit code will cause a failure
+    :cvar validation_mode: validation mode, used to determine startup success.
+        NON_BLOCKING    - runs startup commands, and validates success with validation commands
+        BLOCKING        - runs startup commands, and validates success with the startup commands themselves
+        TIMER           - runs startup commands, and validates success by waiting for "validation_timer" alone
+    :cvar validation_timer: time in seconds for a service to wait for validation, before
+        determining success in TIMER/NON_BLOCKING modes.
+    :cvar validation_period: period in seconds to wait before retrying validation,
+        only used in NON_BLOCKING mode
+    :cvar shutdown: shutdown commands to stop this service
+    """
+
+    name: str = "ExampleService"
+    group: str = "Utility"
+    executables: Tuple[str, ...] = ()
+    dependencies: Tuple[str, ...] = ()
+    dirs: Tuple[str, ...] = ()
+    configs: Tuple[str, ...] = ("myservice1.sh", "myservice2.sh")
+    startup: Tuple[str, ...] = tuple(f"sh {x}" for x in configs)
+    validate: Tuple[str, ...] = ()
+    validation_mode: ServiceMode = ServiceMode.NON_BLOCKING
+    validation_timer: int = 5
+    validation_period: float = 0.5
+    shutdown: Tuple[str, ...] = ()
+
+    @classmethod
+    def on_load(cls) -> None:
+        """
+        Provides a way to run some arbitrary logic when the service is loaded, possibly
+        to help facilitate dynamic settings for the environment.
+
+        :return: nothing
+        """
+        pass
+
+    @classmethod
+    def get_configs(cls, node: CoreNode) -> Tuple[str, ...]:
+        """
+        Provides a way to dynamically generate the config files from the node a service
+        will run. Defaults to the class definition and can be left out entirely if not
+        needed.
+
+        :param node: core node that the service is being ran on
+        :return: tuple of config files to create
+        """
+        return cls.configs
+
+    @classmethod
+    def generate_config(cls, node: CoreNode, filename: str) -> str:
+        """
+        Returns a string representation for a file, given the node the service is
+        starting on the config filename that this information will be used for. This
+        must be defined, if "configs" are defined.
+
+        :param node: core node that the service is being ran on
+        :param filename: configuration file to generate
+        :return: configuration file content
+        """
+        cfg = "#!/bin/sh\n"
+        if filename == cls.configs[0]:
+            cfg += "# auto-generated by MyService (sample.py)\n"
+            for iface in node.get_ifaces():
+                cfg += f'echo "Node {node.name} has interface {iface.name}"\n'
+        elif filename == cls.configs[1]:
+            cfg += "echo hello"
+        return cfg
+
+    @classmethod
+    def get_startup(cls, node: CoreNode) -> Tuple[str, ...]:
+        """
+        Provides a way to dynamically generate the startup commands from the node a
+        service will run. Defaults to the class definition and can be left out entirely
+        if not needed.
+
+        :param node: core node that the service is being ran on
+        :return: tuple of startup commands to run
+        """
+        return cls.startup
+
+    @classmethod
+    def get_validate(cls, node: CoreNode) -> Tuple[str, ...]:
+        """
+        Provides a way to dynamically generate the validate commands from the node a
+        service will run. Defaults to the class definition and can be left out entirely
+        if not needed.
+
+        :param node: core node that the service is being ran on
+        :return: tuple of commands to validate service startup with
+        """
+        return cls.validate
+
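
As a rough sketch of putting such a service in place (the file name and destination path below are examples; use whatever custom_services_dir you configured earlier):

+
# copy the service module into the configured custom services directory
+cp example_service.py /home/<user>/.coregui/custom_services/
+
+# then restart core-daemon (however you normally run it) and watch its
+# output for any Python import errors
+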
diff --git a/services/bird.html b/services/bird.html new file mode 100644 index 00000000..51fc3e09 --- /dev/null +++ b/services/bird.html

BIRD Internet Routing Daemon

+

Overview

+

The BIRD Internet Routing Daemon is a routing daemon; i.e., software responsible for managing kernel packet forwarding tables. It aims to be a dynamic IP routing daemon with full support of all modern routing protocols, an easy to use configuration interface, and a powerful route filtering language, primarily targeted at (but not limited to) Linux and other UNIX-like systems, and is distributed under the GNU General Public License. BIRD has a free implementation of several well known and common routing and router-supplemental protocols, namely RIP, RIPng, OSPFv2, OSPFv3, BGP, BFD, and NDP/RA. BIRD supports the IPv4 and IPv6 address families, the Linux kernel, and several BSD variants (tested on FreeBSD, NetBSD and OpenBSD). BIRD consists of the bird daemon and the birdc interactive CLI client used for supervision.

+

In order to use the BIRD Internet Routing Daemon, you must first install the project on your machine.

+

BIRD Package Install

+
sudo apt-get install bird
+
+

BIRD Source Code Install

+

You can download BIRD source code from its +official repository.

+
./configure
+make
+su
+make install
+vi /etc/bird/bird.conf
+
+

The installation will place the bird directory inside /etc where you will +also find its config file.

+

In order to actually use BIRD, you must modify bird.conf, because the provided configuration file is not configured beyond allowing the bird daemon to start; nothing else will happen if you run it as is.
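
+

Below is a minimal sketch of what such a configuration might look like when enabling a routing protocol (BIRD 1.x style syntax; the router id, interface name, and choice of OSPF are examples only and should be adjusted for your own scenario):

+
# /etc/bird/bird.conf - minimal example sketch
+router id 10.0.0.1;
+
+# export routes learned by BIRD into the kernel routing table
+protocol kernel {
+    export all;
+}
+
+# learn interface state from the kernel
+protocol device {
+}
+
+# run OSPFv2 on an example interface in the backbone area
+protocol ospf {
+    area 0.0.0.0 {
+        interface "eth0";
+    };
+}
+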

diff --git a/services/emane.html b/services/emane.html new file mode 100644 index 00000000..d2b816db --- /dev/null +++ b/services/emane.html

EMANE Services

+

Overview

+

EMANE related services for CORE.

+

Transport Service

+

Helps with setting up EMANE for using an external transport.

diff --git a/services/frr.html b/services/frr.html new file mode 100644 index 00000000..f7c1489a --- /dev/null +++ b/services/frr.html

FRRouting

+

Overview

+

FRRouting is a routing software package that provides TCP/IP based routing services, with support for routing protocols such as BGP, RIP, OSPF, IS-IS and more. FRR also supports special BGP Route Reflector and Route Server behavior. In addition to traditional IPv4 routing protocols, FRR also supports IPv6 routing protocols. With an SNMP daemon that supports the AgentX protocol, FRR provides routing protocol MIB read-only access (SNMP Support).

+

FRR (as of v7.2) currently supports the following protocols:

+
    +
  • BGPv4
  • +
  • OSPFv2
  • +
  • OSPFv3
  • +
  • RIPv1/v2/ng
  • +
  • IS-IS
  • +
  • PIM-SM/MSDP/BSM(AutoRP)
  • +
  • LDP
  • +
  • BFD
  • +
  • Babel
  • +
  • PBR
  • +
  • OpenFabric
  • +
  • VRRPv2/v3
  • +
  • EIGRP (alpha)
  • +
  • NHRP (alpha)
  • +
+

FRRouting Package Install

+

Ubuntu 19.10 and later

+
sudo apt update && sudo apt install frr
+
+

Ubuntu 16.04 and Ubuntu 18.04

+
sudo apt install curl
+curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
+FRRVER="frr-stable"
+echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
+sudo apt update && sudo apt install frr frr-pythontools
+
+

Fedora 31

+
sudo dnf update && sudo dnf install frr
+
+

FRRouting Source Code Install

+

Building FRR from source is the best way to ensure you have the latest features and bug fixes. Details for each +supported platform, including dependency package listings, permissions, and other gotchas, are in the developer’s +documentation.

+

FRR’s source is available on the project GitHub page.

+
git clone https://github.com/FRRouting/frr.git
+
+

Change into your FRR source directory and issue:

+
./bootstrap.sh
+
+

Then, choose the configuration options that you wish to use for the installation. You can find these options on +FRR's official webpage. Once you have chosen your configure +options, run the configure script and pass the options you chose:

+
./configure \
+    --prefix=/usr \
+    --enable-exampledir=/usr/share/doc/frr/examples/ \
+    --localstatedir=/var/run/frr \
+    --sbindir=/usr/lib/frr \
+    --sysconfdir=/etc/frr \
+    --enable-pimd \
+    --enable-watchfrr \
+    ...
+
+

After configuring the software, you are ready to build and install it in your system.

+
make && sudo make install
+
+

If everything finishes successfully, FRR should be installed.
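
+

Before FRR will actually route, the individual protocol daemons typically need to be enabled. A common approach for package installs (the file location and service name may differ for source installs) is to toggle the daemons you need in the daemons file, restart FRR, and verify with vtysh:

+
# enable OSPFv2 as an example
+sudo sed -i 's/^ospfd=no/ospfd=yes/' /etc/frr/daemons
+sudo systemctl restart frr
+
+# verify the installation
+vtysh -c "show version"
+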

diff --git a/services/nrl.html b/services/nrl.html new file mode 100644 index 00000000..a47ba7fc --- /dev/null +++ b/services/nrl.html

NRL Services

+

Overview

+

The Protean Protocol Prototyping Library (ProtoLib) is a cross-platform library that allows applications to be built while supporting a variety of platforms, including Linux, Windows, WinCE/PocketPC, MacOS, FreeBSD, Solaris, etc., as well as the simulation environments of NS2 and Opnet. The goal of Protolib is to provide a set of simple, cross-platform C++ classes that allow development of network protocols and applications that can run on different platforms and in network simulation environments. While Protolib provides an overall framework for developing working protocol implementations, applications, and simulation modules, the individual classes are designed for use as stand-alone components when possible. Although Protolib is principally for research purposes, the code has been constructed to provide robust, efficient performance and adaptability to real applications. In some cases, the code consists of data structures, etc., useful in protocol implementations and, in other cases, provides common, cross-platform interfaces to system services and functions (e.g., sockets, timers, routing tables, etc.).

+

Currently, the Naval Research Laboratory uses this library to develop a wide variety of protocols. The NRL Protolib currently supports the following protocols:

+
    +
  • MGEN_Sink
  • +
  • NHDP
  • +
  • SMF
  • +
  • OLSR
  • +
  • OLSRv2
  • +
  • OLSRORG
  • +
  • MgenActor
  • +
  • arouted
  • +
+

NRL Installation

+

In order to be able to use the different protocols that NRL offers, you must first download the support library itself. +You can get the source code from their NRL Protolib Repo.

+

Multi-Generator (MGEN)

+

Download MGEN from the NRL MGEN Repo, unpack it and copy the +protolib library into the main folder mgen. Execute the following commands to build the protocol.

+
cd mgen/makefiles
+make -f Makefile.{os} mgen
+
+

Neighborhood Discovery Protocol (NHDP)

+

Download NHDP from the NRL NHDP Repo.

+
sudo apt-get install libpcap-dev libboost-all-dev
+wget https://github.com/protocolbuffers/protobuf/releases/download/v3.8.0/protoc-3.8.0-linux-x86_64.zip
+unzip protoc-3.8.0-linux-x86_64.zip
+
+

Then place the binaries somewhere on your $PATH. To see which directories are currently on your path, you can issue the following command:

+
echo $PATH
+
+
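
For example, one way to make the protoc binary available (assuming the zip was extracted in the current directory and that /usr/local/bin is on your path) is:

+
sudo cp bin/protoc /usr/local/bin/
+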

Go to the downloaded NHDP tarball, unpack it and place the protolib library inside the NHDP main folder. Now, compile +the NHDP Protocol.

+
cd nhdp/unix
+make -f Makefile.{os}
+
+

Simplified Multicast Forwarding (SMF)

+

Download SMF from the NRL SMF Repo, unpack it and place the protolib library inside the smf main folder.

+
cd smf/makefiles
+make -f Makefile.{os}
+
+

Optimized Link State Routing (OLSR)

+

To install the OLSR protocol, download their source code from +their NRL OLSR Repo. Unpack it and place the previously +downloaded protolib library inside the nrlolsr main directory. Then execute the following commands:

+
cd ./unix
+make -f Makefile.{os}
+
diff --git a/services/quagga.html b/services/quagga.html new file mode 100644 index 00000000..92f83239 --- /dev/null +++ b/services/quagga.html

Quagga Routing Suite

+

Overview

+

Quagga is a routing software suite, providing implementations of OSPFv2, OSPFv3, RIP v1 and v2, RIPng and BGP-4 for Unix +platforms, particularly FreeBSD, Linux, Solaris and NetBSD. Quagga is a fork of GNU Zebra which was developed by +Kunihiro Ishiguro. +The Quagga architecture consists of a core daemon, zebra, which acts as an abstraction layer to the underlying Unix +kernel and presents the Zserv API over a Unix or TCP stream to Quagga clients. It is these Zserv clients which typically +implement a routing protocol and communicate routing updates to the zebra daemon.

+

Quagga Package Install

+
sudo apt-get install quagga
+
+

Quagga Source Install

+

First, download the source code from their official webpage and install gawk, which is needed for the build.

+
sudo apt-get install gawk
+
+

Extract the tarball, go to the directory of your currently extracted code and issue the following commands.

+
./configure
+make
+sudo make install
+
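
After installation, Quagga runs as a set of per-protocol daemons alongside the zebra daemon. As a rough sketch (the daemon names are standard, but configuration paths and init integration depend on your system), you might start zebra plus a routing protocol daemon like so:

+
sudo zebra -d
+sudo ospfd -d
+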
diff --git a/services/sdn.html b/services/sdn.html new file mode 100644 index 00000000..91dda247 --- /dev/null +++ b/services/sdn.html

Software Defined Networking

+

Overview

+

Ryu is a component-based software defined networking framework. Ryu provides software components with a well-defined API that makes it easy for developers to create new network management and control applications. Ryu supports various protocols for managing network devices, such as OpenFlow, Netconf, and OF-config. For OpenFlow, Ryu fully supports versions 1.0, 1.2, 1.3, 1.4, 1.5 and the Nicira Extensions. All of the code is freely available under the Apache 2.0 license.

+

Installation

+

Prerequisites

+
sudo apt-get install gcc python-dev libffi-dev libssl-dev libxml2-dev libxslt1-dev zlib1g-dev
+
+

Ryu Package Install

+
pip install ryu
+
+

Ryu Source Install

+
git clone git://github.com/osrg/ryu.git
+cd ryu
+pip install .
+
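
Once installed, a quick way to confirm Ryu works is to launch one of its bundled sample applications, for example the simple learning switch:

+
ryu-manager ryu.app.simple_switch_13
+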
diff --git a/services/security.html b/services/security.html new file mode 100644 index 00000000..a82bca42 --- /dev/null +++ b/services/security.html

Security Services

+

Overview

+

The security services offer a wide variety of protocols capable of satisfying most use cases. They include IP security protocols for providing security at the IP layer, as well as the suite of protocols designed to provide that security through authentication and encryption of IP network packets. Virtual Private Networks (VPNs) and firewalls are also available to the user.

+

Installation

+

Libraries needed for some security services.

+
sudo apt-get install ipsec-tools racoon
+
+

OpenVPN

+

Below is a set of instructions for running a very simple OpenVPN client/server scenario.

+

Installation

+
# install openvpn
+sudo apt install openvpn
+
+# retrieve easyrsa3 for key/cert generation
+git clone https://github.com/OpenVPN/easy-rsa
+
+

Generating Keys/Certs

+
# navigate into easyrsa3 repo subdirectory that contains built binary
+cd easy-rsa/easyrsa3
+
+# initialize pki
+./easyrsa init-pki
+
+# build ca
+./easyrsa build-ca
+
+# generate and sign server keypair(s)
+SERVER_NAME=server1
+./easyrsa gen-req $SERVER_NAME nopass
+./easyrsa sign-req server $SERVER_NAME
+
+# generate and sign client keypair(s)
+CLIENT_NAME=client1
+./easyrsa gen-req $CLIENT_NAME nopass
+./easyrsa sign-req client $CLIENT_NAME
+
+# DH generation
+./easyrsa gen-dh
+
+# create directory for keys for CORE to use
+# NOTE: the default is set to a directory that requires using sudo, but can be
+# anywhere and not require sudo at all
+KEYDIR=/etc/core/keys
+sudo mkdir $KEYDIR
+
+# move keys to directory
+sudo cp pki/ca.crt $KEYDIR
+sudo cp pki/issued/*.crt $KEYDIR
+sudo cp pki/private/*.key $KEYDIR
+sudo cp pki/dh.pem $KEYDIR/dh1024.pem
+
+

Configure Server Nodes

+

Add VPNServer service to nodes desired for running an OpenVPN server.

+

Modify sampleVPNServer for the +following

+
    +
  • Edit keydir key/cert directory
  • +
  • Edit keyname to use generated server name above
  • +
  • Edit vpnserver to match an address that the server node will have
  • +
+

Configure Client Nodes

+

Add VPNClient service to nodes desired for acting as an OpenVPN client.

+

Modify sampleVPNClient for the +following

+
    +
  • Edit keydir key/cert directory
  • +
  • Edit keyname to use generated client name above
  • +
  • Edit vpnserver to match the address a server was configured to use
  • +
diff --git a/services/utility.html b/services/utility.html new file mode 100644 index 00000000..ce6c586b --- /dev/null +++ b/services/utility.html

Utility Services

+

Overview

+

A variety of convenience services for carrying out common networking tasks.

+

The following services are provided as utilities:

+
    +
  • UCARP
  • +
  • IP Forward
  • +
  • Default Routing
  • +
  • Default Multicast Routing
  • +
  • Static Routing
  • +
  • SSH
  • +
  • DHCP
  • +
  • DHCP Client
  • +
  • FTP
  • +
  • HTTP
  • +
  • PCAP
  • +
  • RADVD
  • +
  • ATD
  • +
+

Installation

+

To install the functionality of the previously mentioned services, you can run the following command:

+
sudo apt-get install isc-dhcp-server apache2 libpcap-dev radvd at
+
+

UCARP

+

UCARP allows a couple of hosts to share common virtual IP addresses in order to provide automatic failover. It is a +portable userland implementation of the secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's +alternative to the patents-bloated VRRP).

+

Strong points of the CARP protocol are: very low overhead, cryptographically signed messages, interoperability between +different operating systems and no need for any dedicated extra network link between redundant hosts.

+

Installation

+
sudo apt-get install ucarp
+
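
As a minimal sketch of manual usage (the interface, addresses, vhid, password, and script paths below are purely examples), two hosts could share a virtual IP like this:

+
# announce virtual IP 10.0.0.100 with virtual host id 1 from this host (10.0.0.20)
+ucarp -i eth0 -s 10.0.0.20 -v 1 -p secretpass -a 10.0.0.100 \
+      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh
+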
diff --git a/services/xorp.html b/services/xorp.html new file mode 100644 index 00000000..4c267c1e --- /dev/null +++ b/services/xorp.html

XORP routing suite

+

Overview

+

XORP is an open networking platform that supports OSPF, RIP, BGP, OLSR, VRRP, PIM, IGMP (Multicast) and other routing +protocols. Most protocols support IPv4 and IPv6 where applicable. It is known to work on various Linux distributions and +flavors of BSD.

+

XORP started life as a project at the ICSI Center for Open Networking (ICON) at the International Computer Science +Institute in Berkeley, California, USA, and spent some time with the team at XORP, Inc. It is now maintained and +improved on a volunteer basis by a core of long-term XORP developers and some newer contributors.

+

XORP's primary goal is to be an open platform for networking protocol implementations and an alternative to proprietary +and closed networking products in the marketplace today. It is the only open source platform to offer integrated +multicast capability.

+

XORP design philosophy is:

+
    +
  • modularity
  • +
  • extensibility
  • +
  • performance
  • +
  • robustness
  • +

This is achieved by carefully separating functionalities into independent modules and by providing an API for each module.
+

XORP divides into two subsystems. The higher-level ("user-level") subsystem consists of the routing protocols. The +lower-level ("kernel") manages the forwarding path, and provides APIs for the higher-level to access.

+

User-level XORP uses a multi-process architecture with one process per routing protocol, plus a novel inter-process communication mechanism called XRL (XORP Resource Locator).

+

The lower-level subsystem can use traditional UNIX kernel forwarding or the Click modular router. The modularity and independence of the lower level from the user-level subsystem allows it to be easily replaced with other solutions, including high-end hardware-based forwarding engines.

+

Installation

+

In order to be able to install the XORP Routing Suite, you must first install scons in order to compile it.

+
sudo apt-get install scons
+
+

Then, download XORP from its official release web page.

+
# download and extract the latest release tarball from http://www.xorp.org/releases/current/
+# then change into the extracted source directory
+cd xorp
+sudo apt-get install libssl-dev ncurses-dev
+scons
+scons install
+
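
Once installed, XORP is normally driven from a configuration file by its router manager process. A rough sketch of starting it (the binary name is standard, the configuration path is an example) would be:

+
sudo xorp_rtrmgr -b /etc/xorp/config.boot
+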
+ + + + + + + + + \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..0f8724ef --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..e714d97d Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/static/architecture.png b/static/architecture.png new file mode 100644 index 00000000..f4ce3388 Binary files /dev/null and b/static/architecture.png differ diff --git a/static/controlnetwork.png b/static/controlnetwork.png new file mode 100644 index 00000000..c690feba Binary files /dev/null and b/static/controlnetwork.png differ diff --git a/static/core-gui.png b/static/core-gui.png new file mode 100644 index 00000000..6d0fbd40 Binary files /dev/null and b/static/core-gui.png differ diff --git a/static/emane-configuration.png b/static/emane-configuration.png new file mode 100644 index 00000000..ad66a6f3 Binary files /dev/null and b/static/emane-configuration.png differ diff --git a/static/emane-single-pc.png b/static/emane-single-pc.png new file mode 100644 index 00000000..8c58d825 Binary files /dev/null and b/static/emane-single-pc.png differ diff --git a/static/gui/host.png b/static/gui/host.png new file mode 100644 index 00000000..e6efda08 Binary files /dev/null and b/static/gui/host.png differ diff --git a/static/gui/hub.png b/static/gui/hub.png new file mode 100644 index 00000000..c9a2523b Binary files /dev/null and b/static/gui/hub.png differ diff --git a/static/gui/lanswitch.png b/static/gui/lanswitch.png new file mode 100644 index 00000000..eb9ba593 Binary files /dev/null and b/static/gui/lanswitch.png differ diff --git a/static/gui/link.png b/static/gui/link.png new file mode 100644 index 00000000..d6b6745b Binary files /dev/null and b/static/gui/link.png differ diff --git a/static/gui/marker.png b/static/gui/marker.png new file mode 100644 index 00000000..8c60bacb Binary files /dev/null and b/static/gui/marker.png differ diff --git a/static/gui/mdr.png b/static/gui/mdr.png new file mode 100644 index 00000000..b0678ee7 Binary files /dev/null and b/static/gui/mdr.png differ diff --git a/static/gui/oval.png b/static/gui/oval.png new file mode 100644 index 00000000..1babf1b7 Binary files /dev/null and b/static/gui/oval.png differ diff --git a/static/gui/pc.png b/static/gui/pc.png new file mode 100644 index 00000000..3f587e70 Binary files /dev/null and b/static/gui/pc.png differ diff --git a/static/gui/rectangle.png b/static/gui/rectangle.png new file mode 100644 index 00000000..ca6c8c06 Binary files /dev/null and b/static/gui/rectangle.png differ diff --git a/static/gui/rj45.png b/static/gui/rj45.png new file mode 100644 index 00000000..c9d87cfd Binary files /dev/null and b/static/gui/rj45.png differ diff --git a/static/gui/router.png b/static/gui/router.png new file mode 100644 index 00000000..1de5014a Binary files /dev/null and b/static/gui/router.png differ diff --git a/static/gui/run.png b/static/gui/run.png new file mode 100644 index 00000000..a39a997f Binary files /dev/null and b/static/gui/run.png differ diff --git a/static/gui/select.png b/static/gui/select.png new file mode 100644 index 00000000..04e18891 Binary files /dev/null and b/static/gui/select.png differ diff --git a/static/gui/start.png b/static/gui/start.png new file mode 100644 index 00000000..719f4cd9 Binary files /dev/null and b/static/gui/start.png differ diff --git a/static/gui/stop.png b/static/gui/stop.png new file mode 100644 index 
00000000..1e87c929 Binary files /dev/null and b/static/gui/stop.png differ diff --git a/static/gui/text.png b/static/gui/text.png new file mode 100644 index 00000000..14a85dc0 Binary files /dev/null and b/static/gui/text.png differ diff --git a/static/gui/tunnel.png b/static/gui/tunnel.png new file mode 100644 index 00000000..2871b74f Binary files /dev/null and b/static/gui/tunnel.png differ diff --git a/static/gui/wlan.png b/static/gui/wlan.png new file mode 100644 index 00000000..db979a09 Binary files /dev/null and b/static/gui/wlan.png differ diff --git a/static/tutorial-common/running-join.png b/static/tutorial-common/running-join.png new file mode 100644 index 00000000..30fbcb80 Binary files /dev/null and b/static/tutorial-common/running-join.png differ diff --git a/static/tutorial-common/running-open.png b/static/tutorial-common/running-open.png new file mode 100644 index 00000000..7e3e722c Binary files /dev/null and b/static/tutorial-common/running-open.png differ diff --git a/static/tutorial1/link-config-dialog.png b/static/tutorial1/link-config-dialog.png new file mode 100644 index 00000000..73d4ed2d Binary files /dev/null and b/static/tutorial1/link-config-dialog.png differ diff --git a/static/tutorial1/link-config.png b/static/tutorial1/link-config.png new file mode 100644 index 00000000..35f45327 Binary files /dev/null and b/static/tutorial1/link-config.png differ diff --git a/static/tutorial1/scenario.png b/static/tutorial1/scenario.png new file mode 100644 index 00000000..c1a2dfc7 Binary files /dev/null and b/static/tutorial1/scenario.png differ diff --git a/static/tutorial2/wireless-config-delay.png b/static/tutorial2/wireless-config-delay.png new file mode 100644 index 00000000..b375af76 Binary files /dev/null and b/static/tutorial2/wireless-config-delay.png differ diff --git a/static/tutorial2/wireless-configuration.png b/static/tutorial2/wireless-configuration.png new file mode 100644 index 00000000..9b87959c Binary files /dev/null and b/static/tutorial2/wireless-configuration.png differ diff --git a/static/tutorial2/wireless.png b/static/tutorial2/wireless.png new file mode 100644 index 00000000..8543117d Binary files /dev/null and b/static/tutorial2/wireless.png differ diff --git a/static/tutorial3/mobility-script.png b/static/tutorial3/mobility-script.png new file mode 100644 index 00000000..6f32e5b1 Binary files /dev/null and b/static/tutorial3/mobility-script.png differ diff --git a/static/tutorial3/motion_continued_breaks_link.png b/static/tutorial3/motion_continued_breaks_link.png new file mode 100644 index 00000000..cc1f5dcd Binary files /dev/null and b/static/tutorial3/motion_continued_breaks_link.png differ diff --git a/static/tutorial3/motion_from_ns2_file.png b/static/tutorial3/motion_from_ns2_file.png new file mode 100644 index 00000000..704cc1d9 Binary files /dev/null and b/static/tutorial3/motion_from_ns2_file.png differ diff --git a/static/tutorial3/move-n2.png b/static/tutorial3/move-n2.png new file mode 100644 index 00000000..befcd4b0 Binary files /dev/null and b/static/tutorial3/move-n2.png differ diff --git a/static/tutorial5/VM-network-settings.png b/static/tutorial5/VM-network-settings.png new file mode 100644 index 00000000..5d47738e Binary files /dev/null and b/static/tutorial5/VM-network-settings.png differ diff --git a/static/tutorial5/configure-the-rj45.png b/static/tutorial5/configure-the-rj45.png new file mode 100644 index 00000000..0e2b8f8b Binary files /dev/null and b/static/tutorial5/configure-the-rj45.png differ diff --git 
a/static/tutorial5/rj45-connector.png b/static/tutorial5/rj45-connector.png new file mode 100644 index 00000000..8c8e86ef Binary files /dev/null and b/static/tutorial5/rj45-connector.png differ diff --git a/static/tutorial5/rj45-unassigned.png b/static/tutorial5/rj45-unassigned.png new file mode 100644 index 00000000..eda4a3b6 Binary files /dev/null and b/static/tutorial5/rj45-unassigned.png differ diff --git a/static/tutorial6/configure-icon.png b/static/tutorial6/configure-icon.png new file mode 100644 index 00000000..52a9e2e8 Binary files /dev/null and b/static/tutorial6/configure-icon.png differ diff --git a/static/tutorial6/create-nodes.png b/static/tutorial6/create-nodes.png new file mode 100644 index 00000000..38257e24 Binary files /dev/null and b/static/tutorial6/create-nodes.png differ diff --git a/static/tutorial6/hidden-nodes.png b/static/tutorial6/hidden-nodes.png new file mode 100644 index 00000000..604829dd Binary files /dev/null and b/static/tutorial6/hidden-nodes.png differ diff --git a/static/tutorial6/linked-nodes.png b/static/tutorial6/linked-nodes.png new file mode 100644 index 00000000..8e75007e Binary files /dev/null and b/static/tutorial6/linked-nodes.png differ diff --git a/static/tutorial6/only-node1-moving.png b/static/tutorial6/only-node1-moving.png new file mode 100644 index 00000000..01ac2ebd Binary files /dev/null and b/static/tutorial6/only-node1-moving.png differ diff --git a/static/tutorial6/scenario-with-motion.png b/static/tutorial6/scenario-with-motion.png new file mode 100644 index 00000000..e30e781c Binary files /dev/null and b/static/tutorial6/scenario-with-motion.png differ diff --git a/static/tutorial6/scenario-with-terrain.png b/static/tutorial6/scenario-with-terrain.png new file mode 100644 index 00000000..db424e9b Binary files /dev/null and b/static/tutorial6/scenario-with-terrain.png differ diff --git a/static/tutorial6/select-wallpaper.png b/static/tutorial6/select-wallpaper.png new file mode 100644 index 00000000..41d40f57 Binary files /dev/null and b/static/tutorial6/select-wallpaper.png differ diff --git a/static/tutorial6/wlan-links.png b/static/tutorial6/wlan-links.png new file mode 100644 index 00000000..ab6c152d Binary files /dev/null and b/static/tutorial6/wlan-links.png differ diff --git a/static/tutorial7/scenario.png b/static/tutorial7/scenario.png new file mode 100644 index 00000000..1c677aa3 Binary files /dev/null and b/static/tutorial7/scenario.png differ diff --git a/static/workflow.png b/static/workflow.png new file mode 100644 index 00000000..35613983 Binary files /dev/null and b/static/workflow.png differ diff --git a/tutorials/common/grpc.html b/tutorials/common/grpc.html new file mode 100644 index 00000000..bdb51346 --- /dev/null +++ b/tutorials/common/grpc.html @@ -0,0 +1,1258 @@ + + + + + + + + + + + + + + + + + + Grpc - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Grpc

+ +

gRPC Python Scripts

+

You can also run the same steps above, using the provided gRPC script versions of the scenarios. Below are the steps to run and join one of these scenarios, then you can continue with the remaining steps of a given section.

+
    +
  1. Make sure the CORE daemon is running in a terminal, if not already +
    sudop core-daemon
    +
  2. +
  3. From another terminal run the tutorial python script, which will create a session to join +
    /opt/core/venv/bin/python scenario.py
    +
  4. +
  5. In another terminal run the CORE GUI +
    core-gui
    +
  6. +
  7. You will be presented with sessions to join, select the one created by the script +
    +

    + +

    +
  8. +
diff --git a/tutorials/overview.html b/tutorials/overview.html new file mode 100644 index 00000000..9956ad0e --- /dev/null +++ b/tutorials/overview.html

CORE Tutorials

+

These tutorials will cover various use cases within CORE. They provide example python, gRPC, XML, and related files, as well as an explanation of their usage and purpose.

+

Checklist

+

These are the items you should become familiar with for running all the tutorials below.

+ +

Tutorials

diff --git a/tutorials/setup.html b/tutorials/setup.html new file mode 100644 index 00000000..23c3de82 --- /dev/null +++ b/tutorials/setup.html

Tutorial Setup

+

Setup for CORE

+

We assume the prior installation of CORE, using a virtual environment. You can +then adjust your PATH and add an alias to help more conveniently run CORE +commands.

+

This can be setup in your .bashrc

+
export PATH=$PATH:/opt/core/venv/bin
+alias sudop='sudo env PATH=$PATH'
+
+

Setup for Chat App

+

A simple TCP chat app is provided as example software to use and run within the tutorials.

+

Installation

+

The following will install chatapp and its scripts into /usr/local, which you may need to add to the PATH within a node in order to use the commands directly.

+
sudo python3 -m pip install .
+
+
+

Note

+

Some Linux distros will not have /usr/local/bin in their PATH and you will need to compensate.

+
+
export PATH=$PATH:/usr/local/bin
+
+

Running the Server

+

The server will print and log connected clients and their messages.

+
usage: chatapp-server [-h] [-a ADDRESS] [-p PORT]
+
+chat app server
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -a ADDRESS, --address ADDRESS
+                        address to listen on (default: )
+  -p PORT, --port PORT  port to listen on (default: 9001)
+
+

Running the Client

+

The client will print and log messages from other clients and their join/leave status.

+
usage: chatapp-client [-h] -a ADDRESS [-p PORT]
+
+chat app client
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -a ADDRESS, --address ADDRESS
+                        address to listen on (default: None)
+  -p PORT, --port PORT  port to listen on (default: 9001)
+
+

Installing the Chat App Service

+
    +
  1. You will first need to edit /etc/core/core.conf to update the config + service path to pick up your service +
    custom_config_services_dir = <path for service>
    +
  2. +
  3. Then you will need to copy/move chatapp/chatapp_service.py to the directory configured above (see the example commands after this list)
  4. +
  5. Then you will need to restart the core-daemon to pick up this new service
  6. +
  7. Now the service will be an available option under the group ChatApp with + the name ChatApp Server
  8. +
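
Putting those steps together, a hypothetical install might look like the following (the directory is just an example; it must match the custom_config_services_dir value set in /etc/core/core.conf):

+
# example destination configured as custom_config_services_dir
+sudo mkdir -p /opt/core/custom_services
+sudo cp chatapp/chatapp_service.py /opt/core/custom_services/
+
+# then restart core-daemon (however you normally run it) so it loads the new service
+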
diff --git a/tutorials/tutorial1.html b/tutorials/tutorial1.html new file mode 100644 index 00000000..196d267a --- /dev/null +++ b/tutorials/tutorial1.html

Tutorial 1 - Wired Network

+

Overview

+

This tutorial will cover some use cases when using a wired 2 node +scenario in CORE.

+

+ +

+ +

Files

+

Below is the list of files used for this tutorial.

+
    +
  • 2 node wired scenario
      +
    • scenario.xml
    • +
    • scenario.py
    • +
    +
  • +
  • 2 node wired scenario, with n1 running the "Chat App Server" service
      +
    • scenario_service.xml
    • +
    • scenario_service.py
    • +
    +
  • +
+

Running this Tutorial

+

This section covers interactions that can be carried out for this scenario.

+

Our scenario has the following nodes and addresses:

+
    +
  • n1 - 10.0.0.20
  • +
  • n2 - 10.0.0.21
  • +
+

All usages below assume a clean scenario start.

+

Using Ping

+

Using the command line utility ping can be a good way to verify connectivity +between nodes in CORE.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n1 by double clicking it in the GUI

    +
  • +
  • Run the following in n1 terminal +
    ping -c 3 10.0.0.21
    +
  • +
  • You should see the following output +
    PING 10.0.0.21 (10.0.0.21) 56(84) bytes of data.
    +64 bytes from 10.0.0.21: icmp_seq=1 ttl=64 time=0.085 ms
    +64 bytes from 10.0.0.21: icmp_seq=2 ttl=64 time=0.079 ms
    +64 bytes from 10.0.0.21: icmp_seq=3 ttl=64 time=0.072 ms
    +
    +--- 10.0.0.21 ping statistics ---
    +3 packets transmitted, 3 received, 0% packet loss, time 1999ms
    +rtt min/avg/max/mdev = 0.072/0.078/0.085/0.011 ms
    +
  • +
+

Using Tcpdump

+

Using tcpdump can be very beneficial for examining a network. You can verify +traffic being sent/received among many other uses.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n1 by double clicking it in the GUI

    +
  • +
  • Open a terminal on n2 by double clicking it in the GUI
  • +
  • Run the following in n2 terminal +
    tcpdump -lenni eth0
    +
  • +
  • Run the following in n1 terminal +
    ping -c 1 10.0.0.21
    +
  • +
  • You should see the following in n2 terminal +
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    +listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
    +10:23:04.685292 00:00:00:aa:00:00 > 00:00:00:aa:00:01, ethertype IPv4 (0x0800), length 98: 10.0.0.20 > 10.0.0.21: ICMP echo request, id 67, seq 1, length 64
    +10:23:04.685329 00:00:00:aa:00:01 > 00:00:00:aa:00:00, ethertype IPv4 (0x0800), length 98: 10.0.0.21 > 10.0.0.20: ICMP echo reply, id 67, seq 1, length 64
    +
  • +
+

Editing a Link

+

You can edit links between nodes in CORE to modify loss, delay, bandwidth, and more. This can be +beneficial for understanding how software will behave in adverse conditions.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Right click the link between n1 and n2

    +
  • +
  • +

    Select Configure +
    +

    + +

    +
  • +
  • +

    Update the loss to 25 +
    +

    + +

    +
  • +
  • +

    Open a terminal on n1 by double clicking it in the GUI

    +
  • +
  • Run the following in n1 terminal +
    ping -c 10 10.0.0.21
    +
  • +
  • You should see something similar for the summary output, reflecting the change in loss +
    --- 10.0.0.21 ping statistics ---
    +10 packets transmitted, 6 received, 40% packet loss, time 9000ms
    +rtt min/avg/max/mdev = 0.077/0.093/0.108/0.016 ms
    +
  • +
  • Remember that the loss above is compounded, since the loss is applied in both directions, affecting both the ping request and its reply
  • +
+

Running Software

+

We will now leverage the installed Chat App software to stand up a server and client +within the nodes of our scenario.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n1 by double clicking it in the GUI

    +
  • +
  • Run the following in n1 terminal +
    export PATH=$PATH:/usr/local/bin
    +chatapp-server
    +
  • +
  • Open a terminal on n2 by double clicking it in the GUI
  • +
  • Run the following in n2 terminal +
    export PATH=$PATH:/usr/local/bin
    +chatapp-client -a 10.0.0.20
    +
  • +
  • You will see the following output in n1 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.21:44362 joining
    +
  • +
  • Type the following in n2 terminal and hit enter +
    hello world
    +
  • +
  • You will see the following output in n1 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.21:44362 joining
    +[10.0.0.21:44362] hello world
    +
  • +
+

Tailing a Log

+

In this case we are using the service based scenario. This will automatically start +and run the Chat App Server on n1 and log to a file. This case will demonstrate +using tail -f to observe the output of running software.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario_service.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n1 by double clicking it in the GUI

    +
  • +
  • Run the following in n1 terminal +
    tail -f chatapp.log
    +
  • +
  • Open a terminal on n2 by double clicking it in the GUI
  • +
  • Run the following in n2 terminal +
    export PATH=$PATH:/usr/local/bin
    +chatapp-client -a 10.0.0.20
    +
  • +
  • You will see the following output in n1 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.21:44362 joining
    +
  • +
  • Type the following in n2 terminal and hit enter +
    hello world
    +
  • +
  • You will see the following output in n1 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.21:44362 joining
    +[10.0.0.21:44362] hello world
    +
  • +
+

gRPC Python Scripts

+

You can also run the same steps above, using the provided gRPC script versions of the scenarios. Below are the steps to run and join one of these scenarios, then you can continue with the remaining steps of a given section.

+
    +
  1. Make sure the CORE daemon is running in a terminal, if not already +
    sudop core-daemon
    +
  2. +
  3. From another terminal run the tutorial python script, which will create a session to join +
    /opt/core/venv/bin/python scenario.py
    +
  4. +
  5. In another terminal run the CORE GUI +
    core-gui
    +
  6. +
  7. You will be presented with sessions to join, select the one created by the script +
    +

    + +

    +
  8. +
diff --git a/tutorials/tutorial2.html b/tutorials/tutorial2.html new file mode 100644 index 00000000..cbad04b4 --- /dev/null +++ b/tutorials/tutorial2.html

Tutorial 2 - Wireless Network

+

Overview

+

This tutorial will cover the use of a 3 node wireless scenario in CORE, running a chat server on one node and a chat client on another. The client will send a simple message and the server will log receipt of the message.

+

Files

+

Below is the list of files used for this tutorial.

+
    +
  • scenario.xml - 3 node CORE xml scenario file (wireless)
  • +
  • scenario.py - 3 node CORE gRPC python script (wireless)
  • +
+

Running with the XML Scenario File

+

This section will cover running this sample tutorial using the +XML scenario file, leveraging an NS2 mobility file.

+
    +
  • Make sure the core-daemon is running in a terminal +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • In the GUI menu bar select File->Open...
  • +
  • Navigate to and select this tutorials scenario.xml file
  • +
  • +

    You can now click play to start the session +
    +

    + +

    +
  • +
  • +

    Note that the OSPF routing protocol is included in the scenario to provide routes to other nodes as they are discovered

    +
  • +
  • Double click node n4 to open a terminal and ping node n2 +
    ping  -c 2 10.0.0.2
    +PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    +64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=20.2 ms
    +64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=20.2 ms
    +
    +--- 10.0.0.2 ping statistics ---
    +2 packets transmitted, 2 received, 0% packet loss, time 1000ms
    +rtt min/avg/max/mdev = 20.168/20.173/20.178/0.005 ms
    +
  • +
+

Configuring Delay

+
    +
  • +

    Right click on the wlan1 node and select WLAN Config, then set delay to 500000 (microseconds) +
    +

    + +

    +
  • +
  • +

    Using the open terminal for node n4, ping n2 again, expect about 2 seconds delay +

    ping -c 5 10.0.0.2
    +64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=2001 ms
    +64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=2000 ms
    +64 bytes from 10.0.0.2: icmp_seq=3 ttl=63 time=2000 ms
    +64 bytes from 10.0.0.2: icmp_seq=4 ttl=63 time=2000 ms
    +64 bytes from 10.0.0.2: icmp_seq=5 ttl=63 time=2000 ms
    +
    +--- 10.0.0.2 ping statistics ---
    +5 packets transmitted, 5 received, 0% packet loss, time 4024ms
    +rtt min/avg/max/mdev = 2000.176/2000.438/2001.166/0.376 ms, pipe 2
    +

    +
  • +
+

Configure Loss

+
    +
  • +

    Right click on the wlan1 node and select WLAN Config, set delay back to 5000 and loss to 10 +
    +

    + +

    +
  • +
  • +

    Using the open terminal for node n4, ping n2 again, expect to notice considerable loss +

    ping  -c 10 10.0.0.2
    +PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    +64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=20.4 ms
    +64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=20.5 ms
    +64 bytes from 10.0.0.2: icmp_seq=3 ttl=63 time=20.2 ms
    +64 bytes from 10.0.0.2: icmp_seq=4 ttl=63 time=20.8 ms
    +64 bytes from 10.0.0.2: icmp_seq=5 ttl=63 time=21.9 ms
    +64 bytes from 10.0.0.2: icmp_seq=8 ttl=63 time=22.7 ms
    +64 bytes from 10.0.0.2: icmp_seq=9 ttl=63 time=22.4 ms
    +64 bytes from 10.0.0.2: icmp_seq=10 ttl=63 time=20.3 ms
    +
    +--- 10.0.0.2 ping statistics ---
    +10 packets transmitted, 8 received, 20% packet loss, time 9064ms
    +rtt min/avg/max/mdev = 20.188/21.143/22.717/0.967 ms
    +

    +
  • +
  • Make sure to set loss back to 0 when done
  • +
+

Running with the gRPC Python Script

+

This section will cover running this sample tutorial using the +gRPC python script and providing mobility over the gRPC interface.

+
    +
  • Make sure the core-daemon is running in a terminal +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • From another terminal run the scenario.py script +
    /opt/core/venv/bin/python scenario.py
    +
  • +
  • In the GUI dialog box select the session and click connect
  • +
  • You will now have joined the already running scenario
  • +
+

+ +

+ +

Running Software

+

We will now leverage the installed Chat App software to stand up a server and client within the nodes of our scenario. You can use either scenario.xml or the scenario.py gRPC script as the basis of the running scenario.

+
    +
  • In the GUI double click on node n4, this will bring up a terminal for this node
  • +
  • In the n4 terminal, run the server +
    export PATH=$PATH:/usr/local/bin
    +chatapp-server
    +
  • +
  • In the GUI double click on node n2, this will bring up a terminal for this node
  • +
  • In the n2 terminal, run the client +
    export PATH=$PATH:/usr/local/bin
    +chatapp-client -a 10.0.0.4
    +
  • +
  • This will result in n2 connecting to the server
  • +
  • In the n2 terminal, type a message at the client prompt +
    >>hello world
    +
  • +
  • Observe that text typed at client then appears in the terminal of n4 +
    chat server listening on: :9001
    +[server] 10.0.0.2:53684 joining
    +[10.0.0.2:53684] hello world
    +
  • +
diff --git a/tutorials/tutorial3.html b/tutorials/tutorial3.html new file mode 100644 index 00000000..594266c6 --- /dev/null +++ b/tutorials/tutorial3.html

Tutorial 3 - Basic Mobility

+

Overview

+

This tutorial will cover using a 3 node scenario in CORE with basic mobility. Mobility can be provided from an NS2 file or by including mobility commands in a gRPC script.

+

Files

+

Below is the list of files used for this tutorial.

+
    +
  • movements1.txt - an NS2 mobility input file
  • +
  • scenario.xml - 3 node CORE xml scenario file (wireless)
  • +
  • scenario.py - 3 node CORE gRPC python script (wireless)
  • +
  • printout.py - event listener
  • +
+

Running with XML file using NS2 Movement

+

This section will cover running this sample tutorial using the XML scenario +file, leveraging an NS2 file for mobility.

+
    +
  • Make sure the core-daemon is running in a terminal +
    sudop core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • Observe the format of the NS2 file with cat movements1.txt. Note that this file was manually developed. +
    $node_(1) set X_ 208.1
    +$node_(1) set Y_ 211.05
    +$node_(1) set Z_ 0
    +$ns_ at 0.0 "$node_(1) setdest 208.1 211.05 0.00"
    +$node_(2) set X_ 393.1
    +$node_(2) set Y_ 223.05
    +$node_(2) set Z_ 0
    +$ns_ at 0.0 "$node_(2) setdest 393.1 223.05 0.00"
    +$node_(4) set X_ 499.1
    +$node_(4) set Y_ 186.05
    +$node_(4) set Z_ 0
    +$ns_ at 0.0 "$node_(4) setdest 499.1 186.05 0.00"
    +$ns_ at 1.0 "$node_(1) setdest 190.1 225.05 0.00"
    +$ns_ at 1.0 "$node_(2) setdest 393.1 225.05 0.00"
    +$ns_ at 1.0 "$node_(4) setdest 515.1 186.05 0.00"
    +$ns_ at 2.0 "$node_(1) setdest 175.1 250.05 0.00"
    +$ns_ at 2.0 "$node_(2) setdest 393.1 250.05 0.00"
    +$ns_ at 2.0 "$node_(4) setdest 530.1 186.05 0.00"
    +$ns_ at 3.0 "$node_(1) setdest 160.1 275.05 0.00"
    +$ns_ at 3.0 "$node_(2) setdest 393.1 275.05 0.00"
    +$ns_ at 3.0 "$node_(4) setdest 530.1 186.05 0.00"
    +$ns_ at 4.0 "$node_(1) setdest 160.1 300.05 0.00"
    +$ns_ at 4.0 "$node_(2) setdest 393.1 300.05 0.00"
    +$ns_ at 4.0 "$node_(4) setdest 550.1 186.05 0.00"
    +$ns_ at 5.0 "$node_(1) setdest 160.1 275.05 0.00"
    +$ns_ at 5.0 "$node_(2) setdest 393.1 275.05 0.00"
    +$ns_ at 5.0 "$node_(4) setdest 530.1 186.05 0.00"
    +$ns_ at 6.0 "$node_(1) setdest 175.1 250.05 0.00"
    +$ns_ at 6.0 "$node_(2) setdest 393.1 250.05 0.00"
    +$ns_ at 6.0 "$node_(4) setdest 515.1 186.05 0.00"
    +$ns_ at 7.0 "$node_(1) setdest 190.1 225.05 0.00"
    +$ns_ at 7.0 "$node_(2) setdest 393.1 225.05 0.00"
    +$ns_ at 7.0 "$node_(4) setdest 499.1 186.05 0.00"
    +
  • +
  • In the GUI menu bar select File->Open..., and select this tutorials scenario.xml file
  • +
  • You can now click play to start the session
  • +
  • Select the play button on the Mobility Player to start mobility
  • +
  • Observe movement of the nodes
  • +
  • Note that the OSPF routing protocol is included in the scenario to build the routing tables, so that routes to other nodes are known; once the routes are discovered, ping will work
  • +
+

+ +

+ +

Running with the gRPC Script

+

This section covers using a gRPC script to create and provide scenario movement.

+
    +
  • Make sure the core-daemon is running in a terminal +
    sudop core-daemon
    +
  • +
  • From another terminal run the scenario.py script +
    /opt/core/venv/bin/python scenario.py
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • In the GUI dialog box select the session and click connect
  • +
  • You will now have joined the already running scenario
  • +
  • +

    In the terminal running the scenario.py, hit a key to start motion +
    +

    + +

    +
  • +
  • +

    Observe the link between n3 and n4 is shown and then as motion continues the link breaks +
    +

    + +

    +

    +
  • +
+

Running the Chat App Software

+

This section covers using one of the above 2 scenarios to run software within +the nodes.

+
    +
  • In the GUI double click on n4, this will bring up a terminal for this node
  • +
  • in the n4 terminal, run the server +
    export PATH=$PATH:/usr/local/bin
    +chatapp-server
    +
  • +
  • In the GUI double click on n2, this will bring up a terminal for this node
  • +
  • In the n2 terminal, run the client +
    export PATH=$PATH:/usr/local/bin
    +chatapp-client -a 10.0.0.4
    +
  • +
  • This will result in n2 connecting to the server
  • +
  • In the n2 terminal, type a message at the client prompt and hit enter +
    >>hello world
    +
  • +
  • Observe that text typed at client then appears in the server terminal +
    chat server listening on: :9001
    +[server] 10.0.0.2:53684 joining
    +[10.0.0.2:53684] hello world
    +
  • +
+

Running Mobility from a Node

+

This section provides an example of running a script within a node that leverages a control network in CORE to issue mobility commands using the gRPC API.

+
    +
  • Edit the following line in /etc/core/core.conf +
    grpcaddress = 0.0.0.0
    +
  • +
  • Start the scenario from the scenario.xml
  • +
  • From the GUI open Session -> Options and set Control Network to 172.16.0.0/24
  • +
  • Click to play the scenario
  • +
  • Double click on n2 to get a terminal window
  • +
  • From the terminal window for n2, run the script +
    /opt/core/venv/bin/python move-node2.py
    +
  • +
  • Observe that node 2 moves and continues to move
  • +
+

+ +

diff --git a/tutorials/tutorial4.html b/tutorials/tutorial4.html new file mode 100644 index 00000000..658d73e4 --- /dev/null +++ b/tutorials/tutorial4.html

Tutorial 4 - Tests

+

Overview

+

A use case for CORE is to help automate integration tests for software running within a network. This tutorial covers using CORE with the python pytest testing framework. It will show how you can define tests for different use cases to validate software and outcomes within a defined network. Using pytest, you create tests using all the standard pytest functionality: create a test file, then define the test functions to run. For these tests, we are leveraging the CORE library directly and the API it provides.

+

Refer to the pytest documentation for in-depth information on how to write tests with pytest.

+

Files

+

A directory is used to contain your tests. Within this directory we need a conftest.py, which pytest picks up automatically to define and provide the test fixtures that our tests will leverage.

+
    +
  • tests
      +
    • conftest.py - file used by pytest to define fixtures, which can be shared across tests
    • +
    • test_ping.py - defines test classes/functions to run
    • +
    +
  • +
+

Test Fixtures

+

Below are fixture definitions you can use to facilitate and simplify creating CORE-based tests.
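The fixture snippets below omit their imports. Assuming a recent CORE release, a conftest.py built around them would typically begin with something like the following (the module paths are an assumption and can differ between CORE versions):
    # imports the conftest.py fixtures below would typically need
    # (module paths assume a recent CORE release)
    import pytest

    from core.emulator.coreemu import CoreEmu
    from core.emulator.data import IpPrefixes
    from core.emulator.enumerations import EventTypes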

+

The global session fixture creates one CoreEmu object for the entire test session, yields a CORE session from it for testing, and calls shutdown when everything is over.

+
@pytest.fixture(scope="session")
+def global_session():
+    core = CoreEmu()
+    session = core.create_session()
+    session.set_state(EventTypes.CONFIGURATION_STATE)
+    yield session
+    core.shutdown()
+
+

The regular session fixture leverages the global session fixture. It +will set the correct state for each test case, yield the session for a test, +and then clear the session after a test finishes to prepare for the next +test.

+
@pytest.fixture
+def session(global_session):
+    global_session.set_state(EventTypes.CONFIGURATION_STATE)
+    yield global_session
+    global_session.clear()
+
+

The ip prefixes fixture provides a preconfigured convenience for creating and assigning interfaces to nodes when building a network within a test. The address subnet can be whatever you desire.

+
@pytest.fixture(scope="session")
+def ip_prefixes():
+    return IpPrefixes(ip4_prefix="10.0.0.0/24")
+
+

Test Functions

+

Within a pytest test file, you have the freedom to create any kind of +test you like, but they will all follow a similar formula.

+
    +
  • define a test function that will leverage the session and ip prefixes fixtures
  • +
  • then create a network to test, using the session fixture
  • +
  • run commands within nodes as desired, to test out your use case
  • +
  • validate command result or output for expected behavior to pass or fail
  • +
+

In the test below, we create a simple 2 node wired network and validate that node1 can ping node2 successfully (a sketch of a complementary failure test follows this example).

+
def test_success(self, session: Session, ip_prefixes: IpPrefixes):
+    # create nodes
+    node1 = session.add_node(CoreNode)
+    node2 = session.add_node(CoreNode)
+
+    # link nodes together
+    iface1_data = ip_prefixes.create_iface(node1)
+    iface2_data = ip_prefixes.create_iface(node2)
+    session.add_link(node1.id, node2.id, iface1_data, iface2_data)
+
+    # ping node, expect a successful command
+    node1.cmd(f"ping -c 1 {iface2_data.ip4}")
+
+
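The provided tests also include a failure case. A minimal sketch of what such a test might look like is shown below; it assumes that node commands raise CoreCommandError (from core.errors) on a non-zero exit status and that pytest is imported in the test file. This is illustrative and not necessarily identical to the provided test_failure.
    def test_failure(self, session: Session, ip_prefixes: IpPrefixes):
        # create nodes, but deliberately do not link them together
        node1 = session.add_node(CoreNode)
        node2 = session.add_node(CoreNode)
        iface2_data = ip_prefixes.create_iface(node2)

        # with no link in place, the ping fails and raises CoreCommandError
        with pytest.raises(CoreCommandError):
            node1.cmd(f"ping -c 1 {iface2_data.ip4}")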

Install Pytest

+

Since we are running automated tests within CORE, we need to install pytest within the Python interpreter used by CORE.

+
sudo /opt/core/venv/bin/python -m pip install pytest
+
+

Running Tests

+

You can run your own tests or the provided ones by running the following.

+
cd <test directory>
+sudo /opt/core/venv/bin/python -m pytest -v
+
+

If you run the provided tests, you would expect to see the two tests +running and passing.

+
tests/test_ping.py::TestPing::test_success PASSED                                [ 50%]
+tests/test_ping.py::TestPing::test_failure PASSED                                [100%]
+
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/tutorials/tutorial5.html b/tutorials/tutorial5.html new file mode 100644 index 00000000..5013a636 --- /dev/null +++ b/tutorials/tutorial5.html @@ -0,0 +1,1532 @@ + + + + + + + + + + + + + + + + + + + + + + Tutorial 5 - RJ45 Node - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Tutorial 5 - RJ45 Node

+

Overview

+

This tutorial will cover connecting a CORE VM to a Windows host machine using an RJ45 node.

+

Files

+

Below is the list of files used for this tutorial.

+
    +
  • scenario.xml - the scenario with RJ45 unassigned
  • +
  • scenario.py - gRPC script to create the RJ45 in a simple CORE scenario
  • +
  • client_for_windows.py - chat app client modified for windows
  • +
+

Running with the Saved XML File

+

This section covers using the saved scenario.xml file to get up and running.

+
    +
  • +

    Configure the Windows host VM to have a bridged network adapter +
    +

    + +

    +
  • +
  • +

    Make sure the core-daemon is running in a terminal +

    sudo core-daemon
    +

    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    Open the scenario.xml with the unassigned RJ45 node +
    +

    + +

    +
  • +
  • +

    Configure the RJ45 node name to use the bridged interface +
    +

    + +

    +
  • +
  • +

    After configuring the RJ45, run the scenario: +
    +

    + +

    +
  • +
  • +

    Double click node n1 to open a terminal and add a route to the Windows host +

    ip route add 192.168.0.0/24 via 10.0.0.20
    +

    +
  • +
  • On the Windows host, using a Windows command prompt with administrator privileges, add a route that uses the Windows interface connected to the interface assigned to the RJ45 node +
    # if enp0s3 is assigned 192.168.0.6/24
    +route add 10.0.0.0 mask 255.255.255.0 192.168.0.6
    +
  • +
  • Now you should be able to ping from the Windows host to n1 +
    C:\WINDOWS\system32>ping 10.0.0.20
    +
    +Pinging 10.0.0.20 with 32 bytes of data:
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +
    +Ping statistics for 10.0.0.20:
    +    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)
    +Approximate round trip times in milli-seconds:
    +    Minimum = 0ms, Maximum = 0ms, Average = 0ms
    +
  • +
  • After pinging successfully, run the following in the n1 terminal to start the chatapp server +
    export PATH=$PATH:/usr/local/bin
    +chatapp-server
    +
  • +
  • On the Windows host, run the client_for_windows.py +
    python3 client_for_windows.py -a 10.0.0.20
    +connected to server(10.0.0.20:9001) as client(192.168.0.6:49960)
    +>> .Hello WORLD
    +.Hello WORLD Again
    +.
    +
  • +
  • Observe output on n1 +
    chat server listening on: :9001
    +[server] 192.168.0.6:49960 joining
    +[192.168.0.6:49960] Hello WORLD
    +[192.168.0.6:49960] Hello WORLD Again
    +
  • +
  • When finished, you can stop the CORE scenario and cleanup
  • +
  • On the Windows host remove the added route +
    route delete 10.0.0.0
    +
  • +
+

Running with the gRPC Script

+

This section covers leveraging the gRPC script to get up and running.

+
    +
  • +

    Configure the Windows host VM to have a bridged network adapter +
    +

    + +

    +
  • +
  • +

    Make sure the core-daemon is running in a terminal +

    sudo core-daemon
    +

    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • Run the gRPC script in the VM +
    # use the desired interface name, in this case enp0s3
    +/opt/core/venv/bin/python scenario.py enp0s3
    +
  • +
  • +

    In the core-gui connect to the running session that was created +
    +

    + +

    +
  • +
  • +

    Double click node n1 to open a terminal and add a route to the Windows host +

    ip route add 192.168.0.0/24 via 10.0.0.20
    +

    +
  • +
  • On the Windows host, using a Windows command prompt with administrator privileges, add a route that uses the Windows interface connected to the interface assigned to the RJ45 node +
    # if enp0s3 is assigned 192.168.0.6/24
    +route add 10.0.0.0 mask 255.255.255.0 192.168.0.6
    +
  • +
  • Now you should be able to ping from the Windows host to n1 +
    C:\WINDOWS\system32>ping 10.0.0.20
    +
    +Pinging 10.0.0.20 with 32 bytes of data:
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +Reply from 10.0.0.20: bytes=32 time<1ms TTL=64
    +
    +Ping statistics for 10.0.0.20:
    +    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)
    +Approximate round trip times in milli-seconds:
    +    Minimum = 0ms, Maximum = 0ms, Average = 0ms
    +
  • +
  • After pinging successfully, run the following in the n1 terminal to start the chatapp server +
    export PATH=$PATH:/usr/local/bin
    +chatapp-server
    +
  • +
  • On the Windows host, run the client_for_windows.py +
    python3 client_for_windows.py -a 10.0.0.20
    +connected to server(10.0.0.20:9001) as client(192.168.0.6:49960)
    +>> .Hello WORLD
    +.Hello WORLD Again
    +.
    +
  • +
  • Observe output on n1 +
    chat server listening on: :9001
    +[server] 192.168.0.6:49960 joining
    +[192.168.0.6:49960] Hello WORLD
    +[192.168.0.6:49960] Hello WORLD Again
    +
  • +
  • When finished, you can stop the CORE scenario and cleanup
  • +
  • On the Windows host remove the added route +
    route delete 10.0.0.0
    +
  • +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/tutorials/tutorial6.html b/tutorials/tutorial6.html new file mode 100644 index 00000000..f997b0af --- /dev/null +++ b/tutorials/tutorial6.html @@ -0,0 +1,1539 @@ + + + + + + + + + + + + + + + + + + + + + + Tutorial 6 - Improve Visuals - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Tutorial 6 - Improved Visuals

+

Overview

+

This tutorial will cover changing the node icons, changing the background, and changing or hiding links.

+

Files

+

Below is the list of files used for this tutorial.

+
    +
  • drone.png - icon for a drone
  • +
  • demo.py - a mobility script for a node
  • +
  • terrain.png - a background
  • +
  • completed-scenario.xml - the scenario after making all changes below
  • +
+

Running this Tutorial

+

This section covers running this sample tutorial, which develops a scenario file.

+
    +
  • Ensure that /etc/core/core.conf has grpcaddress set to 0.0.0.0
  • +
  • Make sure the core-daemon is running in a terminal +
    sudo core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
+

Changing Node Icons

+
    +
  • +

    Create three MDR nodes +
    +

    + +

    +
  • +
  • +

    Double click on each node for configuration, click the icon and set it to use the drone.png image +
    +

    + +

    +
  • +
  • +

    Use Session -> Options and set Control Network 0 to 172.16.0.0/24

    +
  • +
+

Linking Nodes to WLAN

+
    +
  • Add a WLAN Node
  • +
  • +

    Link the three prior MDR nodes to the WLAN node +
    +

    + +

    +
  • +
  • +

    Click play to start the scenario

    +
  • +
  • +

    Observe wireless links being created +
    +

    + +

    +
  • +
  • +

    Click stop to end the scenario

    +
  • +
  • Right click the WLAN node and select Edit -> Hide
  • +
  • Now you can view the nodes in isolation +
    +

    + +

    +
  • +
+

Changing Canvas Background

+
    +
  • +

    Click Canvas -> Wallpaper to set the background to terrain.png +
    +

    + +

    +
  • +
  • +

    Click play to start the scenario again

    +
  • +
  • You now have a scenario with drone icons, a terrain background, links displayed, and the WLAN node hidden +
    +

    + +

    +
  • +
+

Adding Mobility

+
    +
  • Open and play the completed-scenario.xml
  • +
  • Double click on n1 and run the demo.py script +
    # node id is first parameter, second is total nodes
    +/opt/core/venv/bin/python demo.py 1 3
    +
  • +
  • +

    Let it run to see the link break as the node 1 drone approaches the right side +
    +

    + +

    +
  • +
  • +

    Repeat for the other nodes: double click on n2 and n3 and run the demo.py script +

    # n2
    +/opt/core/venv/bin/python demo.py 2 3
    +# n3
    +/opt/core/venv/bin/python demo.py 3 3
    +

    +
  • +
  • You can turn off wireless links via View -> Wireless Links
  • +
  • Observe the nodes moving in parallel tracks: when the far right is reached, a node moves down and then moves to the left; when the far left is reached, the drone moves down and then moves to the right. +
    +

    + +

    +
  • +
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/tutorials/tutorial7.html b/tutorials/tutorial7.html new file mode 100644 index 00000000..443941f1 --- /dev/null +++ b/tutorials/tutorial7.html @@ -0,0 +1,1727 @@ + + + + + + + + + + + + + + + + + + + + + + Tutorial 7 - EMANE - CORE Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Tutorial 7 - EMANE

+

Overview

+

This tutorial will cover basic usage and some concepts one may want to leverage when working with and creating EMANE-based networks.

+

+ +

+ +

For more detailed information on EMANE see the following:

+ +

Files

+

Below is a list of the files used for this tutorial.

+
    +
  • 2 node EMANE ieee80211abg scenario
      +
    • scenario.xml
    • +
    • scenario.py
    • +
    +
  • +
  • 2 node EMANE ieee80211abg scenario, with n2 running the "Chat App Server" service
      +
    • scenario_service.xml
    • +
    • scenario_service.py
    • +
    +
  • +
+

Running this Tutorial

+

This section covers interactions that can be carried out for this scenario.

+

Our scenario has the following nodes and addresses:

+
    +
  • emane1 - no address, this is a representative node for the EMANE network
  • +
  • n2 - 10.0.0.1
  • +
  • n3 - 10.0.0.2
  • +
+

All usages below assume a clean scenario start.

+

Using Ping

+

Using the command line utility ping can be a good way to verify connectivity +between nodes in CORE.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudo core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n2 by double clicking it in the GUI

    +
  • +
  • Run the following in n2 terminal +
    ping -c 3 10.0.0.2
    +
  • +
  • You should see the following output +
    PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    +64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=7.93 ms
    +64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=3.07 ms
    +64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=3.05 ms
    +
    +--- 10.0.0.2 ping statistics ---
    +3 packets transmitted, 3 received, 0% packet loss, time 2000ms
    +rtt min/avg/max/mdev = 3.049/4.685/7.932/2.295 ms
    +
  • +
+

Using Tcpdump

+

Using tcpdump can be very beneficial for examining a network; among many other uses, you can verify traffic being sent and received.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudo core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n2 by double clicking it in the GUI

    +
  • +
  • Open a terminal on n3 by double clicking it in the GUI
  • +
  • Run the following in n3 terminal +
    tcpdump -lenni eth0
    +
  • +
  • Run the following in n2 terminal +
    ping -c 1 10.0.0.2
    +
  • +
  • You should see the following in the n3 terminal +
    tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
    +listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
    +14:56:25.414283 02:02:00:00:00:01 > 02:02:00:00:00:02, ethertype IPv4 (0x0800), length 98: 10.0.0.1 > 10.0.0.2: ICMP echo request, id 64832, seq 1, length 64
    +14:56:25.414303 02:02:00:00:00:02 > 02:02:00:00:00:01, ethertype IPv4 (0x0800), length 98: 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 64832, seq 1, length 64
    +
  • +
+

Running Software

+

We will now leverage the installed Chat App software to stand up a server and client +within the nodes of our scenario.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudo core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n2 by double clicking it in the GUI

    +
  • +
  • Run the following in n2 terminal +
    export PATH=$PATH:/usr/local/bin
    +chatapp-server
    +
  • +
  • Open a terminal on n3 by double clicking it in the GUI
  • +
  • Run the following in n3 terminal +
    export PATH=$PATH:/usr/local/bin
    +chatapp-client -a 10.0.0.1
    +
  • +
  • You will see the following output in the n2 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.1:44362 joining
    +
  • +
  • Type the following in the n3 terminal and hit enter +
    hello world
    +
  • +
  • You will see the following output in the n2 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.2:44362 joining
    +[10.0.0.2:44362] hello world
    +
  • +
+

Tailing a Log

+

In this case we are using the service based scenario. This will automatically start +and run the Chat App Server on n2 and log to a file. This case will demonstrate +using tail -f to observe the output of running software.

+
    +
  • Make sure the CORE daemon is running in a terminal, if not already +
    sudo core-daemon
    +
  • +
  • In another terminal run the GUI +
    core-gui
    +
  • +
  • +

    In the GUI menu bar select File->Open..., then navigate to and select scenario_service.xml +
    +

    + +

    +
  • +
  • +

    You can now click on the Start Session button to run the scenario +
    +

    + +

    +
  • +
  • +

    Open a terminal on n2 by double clicking it in the GUI

    +
  • +
  • Run the following in n2 terminal +
    tail -f chatapp.log
    +
  • +
  • Open a terminal on n3 by double clicking it in the GUI
  • +
  • Run the following in n3 terminal +
    export PATH=$PATH:/usr/local/bin
    +chatapp-client -a 10.0.0.1
    +
  • +
  • You will see the following output in n2 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.2:44362 joining
    +
  • +
  • Type the following in n3 terminal and hit enter +
    hello world
    +
  • +
  • You will see the following output in n2 terminal +
    chat server listening on: :9001
    +[server] 10.0.0.2:44362 joining
    +[10.0.0.2:44362] hello world
    +
  • +
+

Advanced Topics

+

This section will cover some high level topics and examples for running and +using EMANE in CORE. You can find more detailed tutorials and examples at +the EMANE Tutorial.

+
+

Note

+

Every topic below assumes CORE, EMANE, and OSPF MDR have been installed.

+

Scenario files to support the EMANE topics below will be found in +the GUI default directory for opening XML files.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TopicModelDescription
XML FilesRF PipeOverview of generated XML files used to drive EMANE
GPSDRF PipeOverview of running and integrating gpsd with EMANE
PrecomputedRF PipeOverview of using the precomputed propagation model
EELRF PipeOverview of using the Emulation Event Log (EEL) Generator
Antenna ProfilesRF PipeOverview of using antenna profiles in EMANE
+

gRPC Python Scripts

+

You can also run the same steps above using the provided gRPC script versions of the scenarios. Below are the steps to run and join one of these scenarios; you can then continue with the remaining steps of a given section (a small sketch for checking available sessions from Python follows these steps).

+
    +
  1. Make sure the CORE daemon is running in a terminal, if not already +
    sudo core-daemon
    +
  2. +
  3. From another terminal run the tutorial python script, which will create a session to join +
    /opt/core/venv/bin/python scenario.py
    +
  4. +
  5. In another terminal run the CORE GUI +
    core-gui
    +
  6. +
  7. You will be presented with sessions to join; select the one created by the script +
    +

    + +

    +
  8. +
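If you would rather confirm from Python which sessions are running before joining one in the GUI, a small sketch using the gRPC client is shown below (the client method and summary field names are assumptions based on the CORE Python gRPC API and may differ between releases):
    # hypothetical helper: list running CORE sessions over gRPC
    from core.api.grpc import client

    core = client.CoreGrpcClient()  # defaults to localhost:50051
    core.connect()
    for summary in core.get_sessions():
        print(f"session id={summary.id} state={summary.state} nodes={summary.nodes}")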
+ + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + \ No newline at end of file