OpenStack Juno – OpenDaylight Helium SR2 integration over Ubuntu 14.04 (LTS) using GRE Tunnels

This guide describes in detail the steps needed to integrate OpenStack Juno (with the Neutron ML2 networking plugin) with OpenDaylight Helium SR2 using GRE tunnels. Be careful to replace <SOMETHING HERE> placeholders with the appropriate values.

Also, it is important to know that one OpenDaylight instance manages only one OpenStack deployment.

The guide consists of 10 sections.

  1. Prerequisites
  2. Erase all instances, networks, routers and ports in the Controller Node
  3. Configure OpenvSwitches in Network and Compute Nodes
  4. Configure ml2_conf.ini in all Nodes
  5. Configure Neutron Database in the Controller Node
  6. Create Initial Networks in the Controller Node
  7. Launch Instances in the Controller Node
  8. Verify Everything
  9. Troubleshooting
  10. Resources

If you need help, you can contact me at chsakkas@iit.demokritos.gr

1. Prerequisites

You must have a working OpenStack Juno deployment on Ubuntu 14.04 (LTS). To install it, use the official guide provided by the OpenStack community, available here. It is mandatory to install everything up to Chapter 6 (Network Component with Neutron). Installing Chapter 7 (Dashboard Horizon) is also recommended.

The networks required are:

  • Management network 10.0.0.0/24
  • Tunnel Network 10.0.1.0/24
  • External Network 203.0.113.0/24

The OpenStack nodes required for this guide are:

  • Controller node: Management Network, (External Network if you want public access to the controller)
  • Network node: Management Network, Tunnel Network, External Network
  • Compute node 1: Management Network, Tunnel Network
  • Compute node 2: Management Network, Tunnel Network

If you have followed the official document you should have them already.

Additionally, you must have OpenDaylight Helium SR2 installed on the Management Network. You MUST install it on a separate machine.

We want OpenDaylight to communicate with the switches using OpenFlow 1.3.

Edit etc/custom.properties (in the OpenDaylight distribution directory) and uncomment the line ovsdb.of.version=1.3.
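After uncommenting, the relevant line reads:

    # etc/custom.properties
    ovsdb.of.version=1.3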

Then start OpenDaylight and connect to the console.
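Assuming the standard Helium SR2 Karaf distribution layout, something like the following starts OpenDaylight and attaches to its console:

    cd <OPENDAYLIGHT INSTALL DIRECTORY>
    ./bin/start     # start OpenDaylight in the background
    ./bin/client    # attach to the Karaf console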

Now you are connected to OpenDaylight’s console. Install all the required features.
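The exact feature list did not survive on this page; the set below is the one commonly documented for the Helium OVSDB/OpenStack integration, so treat it as an assumption and adjust it to your deployment:

    feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core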

Wait for the feature installation to finish.

To verify that everything is working, use the following. An empty network list should be returned.
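A quick check against OpenDaylight's Neutron northbound API (port 8080 and the admin/admin credentials are the Helium defaults):

    curl -u admin:admin http://<OPENDAYLIGHT IP>:8080/controller/nb/v2/neutron/networks
    # Expected response:
    # { "networks" : [ ] }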

If you want to monitor OpenDaylight, there are two log files.
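The file names were lost from this page; with the Karaf distribution the main log is data/log/karaf.log, and you can also follow logging from inside the console:

    tail -f <OPENDAYLIGHT INSTALL DIRECTORY>/data/log/karaf.log

    # Or, from within the Karaf console:
    log:tail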

 

2. Erase all instances, networks, routers and ports in the Controller Node

You must delete all existing instances, networks, routers and ports from all tenants. The default installation has two tenants: admin and demo.

If you want, you can do it from the Horizon dashboard, or use the following commands.
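A minimal sketch of the cleanup (openrc file names follow the official Juno guide; repeat the delete commands until the lists are empty):

    source admin-openrc.sh
    nova list
    nova delete <INSTANCE ID>
    neutron router-list
    neutron router-gateway-clear <ROUTER ID>
    neutron router-port-list <ROUTER ID>
    neutron router-interface-delete <ROUTER ID> <SUBNET ID>
    neutron router-delete <ROUTER ID>
    neutron port-list
    neutron port-delete <PORT ID>
    neutron net-list
    neutron net-delete <NETWORK ID>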

Do the same with demo-openrc.sh.

If some ports cannot be deleted, do the following:
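The original snippet is gone; one approach (an assumption, not necessarily the author's) is to detach a stuck port from whatever owns it before deleting it:

    neutron port-show <PORT ID>        # check device_owner and device_id
    # If a router owns it, remove the interface first:
    neutron router-interface-delete <ROUTER ID> <SUBNET ID>
    neutron port-delete <PORT ID>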

Verify that everything is empty.
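For example:

    neutron net-list
    neutron subnet-list
    neutron port-list
    neutron router-list
    nova list
    # All of these should return empty lists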

Stop the neutron-server service for the duration of the configuration.
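On Ubuntu 14.04 this is an upstart service:

    service neutron-server stop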

A message saying that neutron-server has stopped should appear. If not, run the command again to make sure it is stopped.

 

3. Configure OpenvSwitches in Network and Compute Nodes

The Neutron plugin agent on every node must be removed (or stopped and disabled), because from now on only OpenDaylight will be controlling the Open vSwitch instances.
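A sketch for Ubuntu 14.04 (the agent package name comes from the official Juno guide); either stop the agent and keep upstart from respawning it, or purge it:

    service neutron-plugin-openvswitch-agent stop
    echo "manual" > /etc/init/neutron-plugin-openvswitch-agent.override
    # or remove it completely:
    # apt-get purge neutron-plugin-openvswitch-agent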

Clear the Open vSwitch database and start the service again.
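A sketch, assuming the default Ubuntu database location:

    service openvswitch-switch stop
    rm /etc/openvswitch/conf.db
    service openvswitch-switch start
    ovs-vsctl show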

The last command must return an empty Open vSwitch configuration. You should see only the <OPENVSWITCH ID> and the version.

Use the following command to configure the tunnel endpoints.
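A sketch; <TUNNEL INTERFACE IP> is this node's address on the 10.0.1.0/24 tunnel network, and <OPENVSWITCH ID> is the UUID printed by ovs-vsctl show:

    ovs-vsctl set Open_vSwitch <OPENVSWITCH ID> other_config:local_ip=<TUNNEL INTERFACE IP>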

Nothing will appear if this command is entered correctly. To verify the configuration you can use:
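    ovs-vsctl list Open_vSwitch
    # The other_config column should contain: {local_ip="<TUNNEL INTERFACE IP>"}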

ONLY NETWORK NODE SECTION START

Create the bridge br-ex, which is needed for OpenStack's external network.
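A sketch, where <EXTERNAL INTERFACE> is the network node's interface on the 203.0.113.0/24 network (eth2 in the official Juno guide):

    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex <EXTERNAL INTERFACE>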

ONLY NETWORK NODE SECTION END

Connect every Open vSwitch to the OpenDaylight controller.
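Run this on the network node and on both compute nodes; 6640 is the OVSDB listener port used by the Helium OVSDB plugin:

    ovs-vsctl set-manager tcp:<OPENDAYLIGHT IP>:6640
    ovs-vsctl show    # a br-int bridge created by OpenDaylight should appear shortly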

If everything went OK, you can see 4 switches in OpenDaylight: 3 br-int and 1 br-ex.

 

4. Configure ml2_conf.ini in all Nodes

Controller Node

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put in the following configuration.
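The original file contents were lost from this page; the sketch below is consistent with the Juno ML2/OpenDaylight integration (the opendaylight mechanism driver and the [ml2_odl] section are the essential parts; admin/admin are the OpenDaylight defaults):

    [ml2]
    type_drivers = flat,gre
    tenant_network_types = gre
    mechanism_drivers = opendaylight

    [ml2_type_flat]
    flat_networks = external

    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [securitygroup]
    enable_security_group = True

    [ml2_odl]
    url = http://<OPENDAYLIGHT IP>:8080/controller/nb/v2/neutron
    username = admin
    password = admin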

Network Node

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put in the following configuration.
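Again a sketch; this likely mirrors the controller's configuration, since with OpenDaylight no OVS agent runs on this node:

    [ml2]
    type_drivers = flat,gre
    tenant_network_types = gre
    mechanism_drivers = opendaylight

    [ml2_odl]
    url = http://<OPENDAYLIGHT IP>:8080/controller/nb/v2/neutron
    username = admin
    password = admin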

Compute Nodes

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put in the following configuration.
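Same assumption as above: a minimal configuration pointing the opendaylight mechanism driver at the controller:

    [ml2]
    type_drivers = flat,gre
    tenant_network_types = gre
    mechanism_drivers = opendaylight

    [ml2_odl]
    url = http://<OPENDAYLIGHT IP>:8080/controller/nb/v2/neutron
    username = admin
    password = admin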

 

5. Configure Neutron Database in the Controller Node

Reset the Neutron database so that it can be configured for OpenDaylight.
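A sketch following the official Juno guide's database conventions (<NEUTRON DBPASS> is the password chosen during installation):

    mysql -u root -p
    mysql> DROP DATABASE neutron;
    mysql> CREATE DATABASE neutron;
    mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '<NEUTRON DBPASS>';
    mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '<NEUTRON DBPASS>';
    mysql> exit

    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron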

If everything finished without errors, you can start neutron-server.
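On Ubuntu 14.04:

    service neutron-server start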

 

6. Create Initial Networks in the Controller Node
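The commands for this section were lost from this page; the sketch below follows the official Juno "create initial networks" steps, using the external range above and a 192.168.1.0/24 tenant subnet (consistent with the addresses visible in comment #6):

    source admin-openrc.sh
    neutron net-create ext-net --router:external True
    neutron subnet-create ext-net --name ext-subnet \
      --allocation-pool start=203.0.113.101,end=203.0.113.200 \
      --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24

    source demo-openrc.sh
    neutron net-create demo-net
    neutron subnet-create demo-net --name demo-subnet \
      --gateway 192.168.1.1 192.168.1.0/24
    neutron router-create demo-router
    neutron router-interface-add demo-router demo-subnet
    neutron router-gateway-set demo-router ext-net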

 

7. Launch Instances in the Controller Node

Get the preferred <HYPERVISOR NAME> from the command below.
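Listing hypervisors requires admin credentials:

    source admin-openrc.sh
    nova hypervisor-list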

Get the demo <NETWORK ID> from the command below.
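As the demo tenant:

    source demo-openrc.sh
    neutron net-list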

Get <IMAGE NAME> from the command below.
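For example:

    nova image-list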

Launch the instances!
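A sketch of booting one instance per compute node; the nova availability-zone syntax pins an instance to a specific hypervisor (this requires admin rights), and the flavor and instance names are assumptions:

    nova boot --flavor m1.tiny --image <IMAGE NAME> \
      --nic net-id=<NETWORK ID> \
      --availability-zone nova:<HYPERVISOR NAME> demo-instance1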

 

8. Verify Everything

If everything works correctly, you will be able to ping every VM.
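For example, pinging between instances on different compute nodes exercises the GRE tunnels:

    nova list          # note each instance's 192.168.1.x address
    # then, from the console of one VM (e.g. via Horizon):
    ping 192.168.1.<X>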

You should also be able to see the GRE tunnels in the output of ovs-vsctl show on each node.
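An illustrative excerpt from a compute node (port names, UUIDs and IPs will differ):

    ovs-vsctl show
    ...
        Bridge br-int
            Port "gre-<...>"
                Interface "gre-<...>"
                    type: gre
                    options: {key=flow, local_ip="10.0.1.31", remote_ip="10.0.1.21"}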

 

9. Troubleshooting

If networking between VMs stops working after a while, try restarting Open vSwitch on the Network and Compute nodes.
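On Ubuntu 14.04:

    service openvswitch-switch restart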

 

 

10. Resources

 

Acknowledgments

This work was done in the framework of the FP7 T-NOVA EU project.

  • #1 written by Tuan Thai
    about 2 years ago

    Is there tunnel interface in neutron node? I follow strictly the guide, but there are no br-ints at the end of step 3.

  • #2 written by Yao Lin
    about 2 years ago

    Why do you need ml2_conf.ini file in the network and compute nodes? With ODL, there is no neutron-plugin-openvswitch-agent running in these nodes and thus ml2.conf.ini is not used.

    I am using Icehouse + SR3. It fails to create ext-net. Any clue?

    Thanks.

  • #3 written by Yao Lin
    about 2 years ago

    Another question. I don’t see –shared option with “neutron net-create ext-net”. I am not sure what this implies. Maybe only 1 tenant can talk to the internet?

  • #4 written by RaviRajan
    about 2 years ago

    Hi,
    I am using Rhel7 & juno+ SR2.
    I dont find neutron-plugin-openvswitch-agent.
    What shd be done.
    Thanks in advance.

  • #5 written by Sara
    about 2 years ago

    Hello,

    I follow strictly the guide too, but there are no br-ints at the end of step 3.

    Can you please help me to solve the problem ?

    Thanks

  • #6 written by Adhiraj Singh
    about 1 year ago

Please take a look at my ODL log. I am able to create a VM but it's showing an error. Please do the needful…

    org.opendaylight.ovsdb.openstack.net-virt – 1.1.1.Lithium-SR1 | enqueueEvent: evenDispatcher: org.opendaylight.ovsdb.openstack.netvirt.impl.EventDispatcherImpl@4ef25bfb – NorthboundEvent [handler=NEUTRON_PORT, action=DELETE, port=NeutronPort [portUUID=d19247cd-9c86-48be-bbf9-4a458a4aa554, networkUUID=6f74cc7b-3d3a-4fdb-b50a-5df06ca5c938, name=, adminStateUp=true, status=null, macAddress=FA:16:3E:53:26:6A, fixedIPs=[Neutron_IPs{ipAddress=’192.168.1.7′, subnetUUID=’37183ee5-990c-45fb-9ec3-124c69380858′}], deviceID=568f287e-3a54-4f90-b653-922cbdd8fc2a, deviceOwner=compute:nova, tenantID=b68a772387b34ff4ab2e60a460cda5c5, floatingIPMap={}, securityGroups=[NeutronSecurityGroup{securityGroupUUID=’73b60679-58cd-4dc5-a1ec-dc69ec11cdb4′, securityGroupName=’default’, securityGroupDescription=’default’, securityGroupTenantID=’b68a772387b34ff4ab2e60a460cda5c5′, securityRules=[NeutronSecurityRule{securityRuleUUID=’1bff0c0d-4003-4ce8-8a1a-f1be0319aeef’, securityRuleDirection=’ingress’, securityRuleProtocol=’null’, securityRulePortMin=null, securityRulePortMax=null, securityRuleEthertype=’IPv6′, securityRuleRemoteIpPrefix=’null’, securityRemoteGroupID=73b60679-58cd-4dc5-a1ec-dc69ec11cdb4, securityRuleGroupID=’73b60679-58cd-4dc5-a1ec-dc69ec11cdb4′, securityRuleTenantID=’b68a772387b34ff4ab2e60a460cda5c5′}, NeutronSecurityRule{securityRuleUUID=’a3f651b3-4622-431a-8fe6-5975d6cdd6b7′, securityRuleDirection=’egress’, securityRuleProtocol=’null’, securityRulePortMin=null, securityRulePortMax=null, securityRuleEthertype=’IPv4′, securityRuleRemoteIpPrefix=’null’, securityRemoteGroupID=null, securityRuleGroupID=’73b60679-58cd-4dc5-a1ec-dc69ec11cdb4′, securityRuleTenantID=’b68a772387b34ff4ab2e60a460cda5c5′}, NeutronSecurityRule{securityRuleUUID=’f819fc7a-cf3b-4054-a320-41d50e0b9394′, securityRuleDirection=’ingress’, securityRuleProtocol=’null’, securityRulePortMin=null, securityRulePortMax=null, securityRuleEthertype=’IPv4′, securityRuleRemoteIpPrefix=’null’, securityRemoteGroupID=73b60679-58cd-4dc5-a1ec-dc69ec11cdb4, securityRuleGroupID=’73b60679-58cd-4dc5-a1ec-dc69ec11cdb4′, securityRuleTenantID=’b68a772387b34ff4ab2e60a460cda5c5′}, NeutronSecurityRule{securityRuleUUID=’f958122a-34a1-485f-bb66-523ddd8daeb4′, securityRuleDirection=’egress’, securityRuleProtocol=’null’, securityRulePortMin=null, securityRulePortMax=null, securityRuleEthertype=’IPv6′, securityRuleRemoteIpPrefix=’null’, securityRemoteGroupID=null, securityRuleGroupID=’73b60679-58cd-4dc5-a1ec-dc69ec11cdb4′, securityRuleTenantID=’b68a772387b34ff4ab2e60a460cda5c5′}]]], bindinghostID=compute1, bindingvnicType=normal, bindingvnicType=normal], subnet=null, router=null, routerInterface=null, floatingIP=null, network=null, loadBalancer=null, loadBalancerPool=null, loadBalancerPoolMember=null]

  • #7 written by ganpiste
    about 1 year ago

At the end we can use the following link to see the bridges' interfaces: http://<OPENDAYLIGHT INTERFACE IP>:8181/dlux/index.html. But I get 4 br-int and 1 br-ex, and the created instances are not able to ping. Could someone help me by giving me the network configuration files of each of the nodes?
