---
title: "Networking"
date: 2022-02-02T11:22:35+02:00
draft: false
---
{{< hint danger >}}
**Please read the following rules:**
1. If you are using a [PERSONAL](/cloud/register/#personal-project) project, you have to use either the `147-251-115-pers-proj-net` or `78-128-250-pers-proj-net` network to make your instance accessible from an external network (e.g. the Internet). Use `public-muni-147-251-115-PERSONAL` or `public-cesnet-78-128-250-PERSONAL` for FIP allocation.
2. If you are using a [GROUP](/cloud/register/#group-project) project, you may choose any of the networks with the [`-GROUP`](/cloud/network/#ipv4-group-networking) suffix for FIP allocation to make your instance accessible from an external network (e.g. the Internet).
3. Violation of network usage rules may lead to removal of resources and reduction of the assigned quotas.
{{< /hint >}}
## Public networking
In MetaCentrum Cloud (MCC) we support both IPv4 and IPv6. IPv4 allocation is based on Floating IPs (FIP): before allocating a FIP for a VM, you must first connect the virtual network containing that VM to the public network. Further information is available in section [virtual networking](/cloud/network/#virtual-networking). IPv6 allocation is based on a shared public IPv6 network, which can be attached to VMs directly.
If you decide to attach a second interface to your VM, verify that the interface is configured correctly. Older VM images have secondary interfaces down by default, and some images need further configuration to enable IPv6 SLAAC.
{{< hint info >}}
Don't forget to set up security groups accordingly.
{{< /hint >}}
### IPv4 personal networking
IPv4 personal networking is currently limited to the shared internal networks. You can start your machine in network `78-128-250-pers-proj-net` or `147-251-115-pers-proj-net` and allocate a floating IP address from the pool `public-cesnet-78-128-250-PERSONAL` or `public-muni-147-251-115-PERSONAL`, respectively. All VMs need to be connected to the same network. You cannot use virtual routers with personal projects. We encourage users to also use IPv6 addresses for long-term use. Allocated but unassigned addresses are released daily.
### IPv4 group networking
The situation is different for group projects: you cannot use the same approach as for personal projects. Instead, create a virtual network as described in section [virtual networking](/cloud/network/#virtual-networking) and select one of the pools with the `-GROUP` suffix, namely:
- `public-cesnet-78-128-251-GROUP`
- `public-cesnet-195-113-167-GROUP`
- `public-muni-147-251-21-GROUP`
- `public-muni-147-251-124-GROUP`
- `public-muni-147-251-255-GROUP`
{{< hint danger >}}
Addresses that are unassigned for longer than 3 months can be released.
{{< /hint >}}
{{< hint info >}}
If you use a MUNI account, you can use `private-muni-10-16-116` and log into the network via the MUNI VPN, or you can set up proxy networking, which is described in section [proxy networking](/cloud/network/#proxy-networking).
{{< /hint >}}
### IPv6 networking
We provide the IPv6 network `public-muni-v6-432`, which is available to both personal and group projects and can be attached to VMs directly. If your VM does not receive the allocated address, check section [obtaining IPv6 address](/cloud/network/#obtaining-ipv6-address).
***
## Virtual networking
MetaCentrum Cloud offers software-defined networking as one of its services. Users can create their own
networks and subnets, connect them with routers and set up tiered network topologies.
Prerequisites:
* Basic understanding of routing
* Basic understanding of TCP/IP
For details, refer to [the official documentation](https://docs.openstack.org/horizon/train/user/create-networks.html).
### Network creation
For a group project, you first need to create an internal network; you may use the auto-allocated pool for subnet auto-creation.
{{< expand "Configuration using Horizon GUI" >}}
Navigate to **Network &gt; Networks** in the left menu and click **Create Network** on the right side of the window. This starts an interactive dialog for network creation.
![](images/1.png)
![](images/2.png)
Inside the interactive dialog:
1. Type in the network name
![](images/3.png)
2. Move to the **Subnet** section, either by clicking **Next** or by clicking the **Subnet** tab. You may choose to enter the network range manually (recommended for advanced users, to avoid interfering with public IP address ranges), or select **Allocate Network Address from a pool**. In the **Address pool** section, select `private-192-168`. Select a network mask that suits your needs (the default `/27` can hold up to 29 machines; use an IP calculator if you are unsure).
![](images/4.png)
3. On the last tab, **Subnet Details**, check that a DNS server is present and the DHCP box is checked. Alternatively, you can create an allocation pool or specify static routes here (for advanced users).
![](images/5.png)
{{< /expand >}}
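If you want to double-check how many machines a given mask can hold, you can compute it locally; a quick sketch using Python's standard `ipaddress` module (the `192.168.0.0/27` range here is just an illustration):

```python
import ipaddress

# The default /27 mask from the subnet dialog: 32 addresses in total.
subnet = ipaddress.ip_network("192.168.0.0/27")

total = subnet.num_addresses   # 32 addresses
usable = total - 2             # minus network and broadcast addresses -> 30
machines = usable - 1          # minus the router/gateway port -> 29

print(f"/{subnet.prefixlen}: {machines} machines")  # /27: 29 machines
```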
{{< expand "Configuration using CLI" >}}
**Create network**
```
openstack network create my-net1
```
Additional network configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/network.html).
**Create subnet for the network (from auto-allocated pool)**
```
openstack subnet create --network my-net1 --subnet-pool private-192-168 my-sub1
```
**Create subnet for the network (with a specific range)**
```
openstack subnet create --network my-net1 --subnet-range 192.168.0.0/24 my-sub1
```
Additional subnet configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/subnet.html).
{{< /expand >}}
### Router creation
{{< expand "Configuration using Horizon GUI" >}}
Navigate to **Network &gt; Routers** in the left menu and click **Create Router** on the right side of the window.
In the interactive dialog:
1. Enter router name and select external gateway with the `-GROUP` suffix.
![](images/r1.png)
Now you need to attach your internal network to the router.
1. Click on the router you just created.
2. Move to the **Interfaces** tab and click on the **Add interface**.
![](images/r2.png)
3. Select a previously created subnet and submit.
![](images/r3.png)
{{< /expand >}}
{{< expand "Configuration using CLI" >}}
**Create router**
```
openstack router create my-router1
```
A newly created router has no ports, so it cannot route anything yet; you need to create at least two interfaces ([external](/cloud/network/#router-external-gateway-assign) and internal).
**Attach the created internal subnet to the router (the router becomes its gateway)**
```
openstack router add subnet my-router1 my-sub1
```
Additional router configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/router.html).
{{< /expand >}}
{{< hint info >}}
Routers can also be used to route traffic between internal networks. This is an advanced topic not covered in this guide.
{{< /hint >}}
### Router external gateway assign
If your router has no gateway, you can assign a new one.
{{< expand "Configuration using Horizon GUI" >}}
1. You can set your gateway by clicking **Set Gateway**.
![](images/set-router1.png)
2. Choose the network you desire to use (e.g. **public-cesnet-78-128-251**) and confirm.
![](images/set-router2.png)
{{< /expand >}}
{{< expand "Configuration using CLI" >}}
**Set the external network for the router (for example `public-muni-147-251-255-GROUP`); the external port will be created automatically**
```
openstack router set --external-gateway public-muni-147-251-255-GROUP my-router1
```
Additional router configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/router.html).
{{< /expand >}}
### Router external gateway release
{{< expand "Configuration using Horizon GUI" >}}
Navigate to **Project &gt; Network &gt; Routers** and click the **Clear Gateway** action of your router. This disassociates the external network from your router, so your machines will no longer be able to access the Internet. If you get an error, you first need to **Disassociate Floating IPs**.
![](images/clear-router1.png)
{{< /expand >}}
{{< expand "Configuration using CLI" >}}
**Release external gateway from router**
```
openstack router unset --external-gateway my-router1
```
Make sure to first [release FIPs](/cloud/network/#release-floating-ips) from the network.
Additional router configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/router.html).
{{< /expand >}}
### Associate Floating IPs
Floating IPs are used to assign a public IP address to VMs.
{{< expand "Configuration using Horizon GUI" >}}
1. Go to **Project &gt; Network &gt; Floating IPs** and click on the **Allocate IP to Project** button. Select **Pool** with the same value as the network you chose in the previous step and confirm it by clicking **Allocate IP**.
![](images/allocate-fip.png)
2. Now click on the **Associate** button next to the Floating IP you just created. Select **Port to be associated** with the desired instance. Confirm with the **Associate** button. Repeat this section for all your machines requiring a Floating IP.
![](images/associate-fip.png)
{{< /expand >}}
{{< expand "Configuration using CLI" >}}
**Allocate new Floating IPs**
```
openstack floating ip create public-cesnet-78-128-251
```
**And assign it to your server**
```
openstack server add floating ip net-test1 78.128.251.27
```
Additional floating IP configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/floating-ip.html).
{{< /expand >}}
{{< hint info >}}
The floating IP address must be from the same network pool which was selected as the router network gateway.
{{< /hint >}}
### Release Floating IPs
{{< expand "Configuration using Horizon GUI" >}}
Go to **Project &gt; Compute &gt; Instances**. In the **Actions** menu of the instance you wish to change, click **Disassociate Floating IP** and specify that you wish to **Release Floating IP**. Confirm by clicking the **Disassociate** button.
WARNING: After this action, your project will no longer be able to use the released floating IP address.
![](images/instance1.png)
{{< /expand >}}
{{< expand "Configuration using CLI" >}}
**Remove existing floating IP**
**List your servers**
```
$ openstack server list
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
| 1a0d4624-5294-425a-af37-a83eb0640e1c | net-test1 | ACTIVE | auto_allocated_network=192.168.8.196, 147.251.124.248 | | standard.small |
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
```
**Remove floating IPs**
```
$ openstack server remove floating ip net-test1 147.251.124.248
$ openstack floating ip delete 147.251.124.248
```
Additional floating IP configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/floating-ip.html).
{{< /expand >}}
### Obtaining IPv6 address
Public IPv6 addresses are assigned via SLAAC. After attaching an interface to your instance in OpenStack, verify the correct [configuration](/cloud/network/#interface-not-working) of your VM. You can attach the interface either by connecting your VM directly to the network upon creation (make sure you set up DNS records if you decide to use only IPv6) or by attaching a secondary interface later.
{{< hint danger >}}
Don't forget to update your [Security Groups](/cloud/network/#security-rules).
{{< /hint >}}
{{< expand "Configuration using Horizon GUI" >}}
Go to **Project &gt; Compute &gt; Instances**. Click on the menu **Actions** on the instance you wish to change and click on **Attach interface**.
![](images/attach_interface.png)
In the **Network** dropdown menu select available IPv6 network.
![](images/ipv6_attach.png)
{{< /expand >}}
{{< expand "Configuration using CLI" >}}
**Get the ID of your VM (named `my-vm` in this example)**
```
VM_ID=$(openstack server list --name my-vm -f value -c ID)
```
**Create a port in the IPv6 network and attach it to the VM**
```
openstack port create --network public-muni-v6-432 --security-group default ipv6-port
openstack server add port ${VM_ID} ipv6-port
```
Additional port configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/port.html).
{{< /expand >}}
### Security rules
Security rules in OpenStack serve as a firewall. They are applied directly on VM ports, so proper configuration is necessary. Both ingress and egress rules can be configured using Horizon and the CLI. If you can't connect to or ping your instance, security rules are a likely cause.
If you delete the default egress rules, your virtual machine will not be able to send outgoing traffic. To fix this, add a new egress rule with *any* IP protocol and port range and set the Remote IP prefix to *0.0.0.0/0* (IPv4) or *::/0* (IPv6).
{{< expand "Configuration using Horizon GUI" >}}
![](images/network_secutity_groups_egress.png)
{{< /expand >}}
{{< expand "Configuration using CLI" >}}
**Create security group**
```
openstack security group create my-security-group
```
**Add rules to your security group**
```
openstack security group rule create --description "Permit SSH" --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 22 --ingress my-security-group
openstack security group rule create --description "Permit SSH IPv6" --remote-ip ::/0 --ethertype IPv6 --protocol tcp --dst-port 22 --ingress my-security-group
openstack security group rule create --description "Permit ICMP (any)" --remote-ip 0.0.0.0/0 --protocol icmp --icmp-type -1 --ingress my-security-group
openstack security group rule create --description "Permit ICMPv6 (any)" --remote-ip ::/0 --ethertype IPv6 --protocol ipv6-icmp --ingress my-security-group
```
**Verify rule**
```
openstack security group show my-security-group
```
Additional security group configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/security-group.html).
{{< /expand >}}
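If the default egress rules have been deleted, they can also be re-created from the CLI; a sketch using the `my-security-group` group from the examples above (adjust the group name to your own):

```shell
# Allow all outgoing IPv4 traffic (any protocol, any port range)
openstack security group rule create --egress --remote-ip 0.0.0.0/0 my-security-group

# Allow all outgoing IPv6 traffic
openstack security group rule create --egress --ethertype IPv6 --remote-ip ::/0 my-security-group
```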
***
## Load balancers
Load balancers act as a proxy between virtualised infrastructure and clients in the outside network. This is essential in OpenStack scenarios where the infrastructure dynamically starts new VMs and adds them to the load-balancing pool to keep services accessible.
When modifying a load balancer, each operation puts it into an immutable state until the change completes. It is therefore recommended to use the `--wait` switch when creating, editing, or removing load balancer resources.
{{< hint info >}}
We are currently observing inaccessibility of some load balancers on floating IP after creation. If this happens, please try to rebuild the load balancer before contacting support.
{{< /hint >}}
### Provisioning Status
This status represents the overall state of the load balancer backend.
- `ACTIVE`: the load balancer backend is working as intended.
- `PENDING`: statuses starting with `PENDING` usually reflect an ongoing modification of the load balancer, during which it is immutable and any additional operations will fail.
- `ERROR`: provisioning has failed. The load balancer can't be modified and usually does not work, so we encourage users to remove it. If this happens repeatedly, please report the problem to `cloud@metacentrum.cz`.
- `DELETED`: entity has been deleted.
### Operating status
Operating status is managed by the health monitor service of the load balancer and reflects the availability of the endpoint services.
- `ONLINE`: all endpoint services are available.
- `DEGRADED`: some endpoint services are not available.
- `ERROR`: all endpoint services are unavailable.
- `DRAINING`: not accepting new connections.
- `OFFLINE`: entity is administratively disabled.
- `NO_MONITOR`: health monitor is not configured.
### Creating loadbalancers
To create a load balancer, first prepare a pool of VMs running the service you wish to balance. Then create the load balancer in the same network and assign the pool as well as listeners on specific ports.
{{< expand "Configuration using CLI" >}}
1. Create the load balancer
```
openstack loadbalancer create --name my_loadbalancer --vip-subnet-id my_subnet_id --wait
```
2. Create listeners (e.g. port 80)
```
openstack loadbalancer listener create --name my_listener --protocol TCP --protocol-port 80 --wait my_loadbalancer
```
3. Create LB pools
```
openstack loadbalancer pool create --name my_pool --lb-algorithm ROUND_ROBIN --listener my_listener --protocol TCP --wait
```
4. Create Health Monitors
```
openstack loadbalancer healthmonitor create --delay 5 --max-retries 3 --timeout 3 --type HTTP --url-path / --wait my_pool
```
5. Assign endpoint VMs
```
openstack loadbalancer member create --address vm_ip_address --protocol-port 80 --wait my_pool
```
{{< /expand >}}
### Deleting loadbalancers
When deleting a load balancer, first disassociate the floating IP address it uses.
{{< expand "Configuration using CLI" >}}
To delete the load balancer and all its resources, run:
```
openstack loadbalancer delete --cascade --wait my_loadbalancer
```
{{< /expand >}}
## Scenarios
### Creating new networking
Creating new networking for a project can be divided into these steps:
- [Create new network and subnet](/cloud/network/#network-creation).
- [Create router and assign interface](/cloud/network/#router-creation).
- [Assign external gateway](/cloud/network/#router-external-gateway-assign).
- [Assign FIPs to VMs](/cloud/network/#associate-floating-ips).
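The steps above can be sketched end-to-end with the CLI commands from the linked sections (names such as `my-net1` and `my-vm` are placeholders; keep the `<allocated-fip>` placeholder until you know the address the FIP allocation returned):

```shell
# 1. Create a network and a subnet from the auto-allocated pool
openstack network create my-net1
openstack subnet create --network my-net1 --subnet-pool private-192-168 my-sub1

# 2. Create a router and attach the internal subnet to it
openstack router create my-router1
openstack router add subnet my-router1 my-sub1

# 3. Assign an external gateway from a -GROUP network
openstack router set --external-gateway public-muni-147-251-255-GROUP my-router1

# 4. Allocate a FIP from the same network and assign it to a VM
openstack floating ip create public-muni-147-251-255-GROUP
openstack server add floating ip my-vm <allocated-fip>
```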
### Changing external network
To migrate to a different external network, follow these steps:
- [Release all Floating IPs](/cloud/network/#release-floating-ips).
- [Clear the router gateway](/cloud/network/#router-external-gateway-release).
- [Assign the router gateway to the selected external network](/cloud/network/#router-external-gateway-assign).
- [Allocate and assign new FIPs from selected external network](/cloud/network/#associate-floating-ips).
### Proxy networking
In your OpenStack instances, you can use private or public networks. If you use a private network and need to access the Internet for updates etc.,
see [proxy issues](/cloud/faq/#issues-with-proxy-in-private-networks), where the proxy connection is explained.
### Interface not working
Please verify the correct configuration of security groups on your VM. More information is available in section [security rules](/cloud/network/#security-rules).
Some VM images have additional interfaces down by default. In this case, it is necessary to connect to the VM through the default interface and bring the other interfaces up.
Known images with this flaw:
- `centos-7-x86_64`
- `ubuntu-bionic-x86_64`
Usually, once you enable the interface, the VM should obtain an IPv4 address through DHCP and an IPv6 address through SLAAC. If you receive an IPv4 address but no IPv6 address, verify the correct configuration of SLAAC on that VM interface. This flaw was spotted on image:
- `centos-8-x86_64`
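On the VM itself, bringing a secondary interface up and checking SLAAC can look like this (a sketch for a typical Linux guest; the interface name `eth1` is an assumption, list your interfaces with `ip link` first):

```shell
# Bring the secondary interface up (eth1 is an assumed name)
sudo ip link set eth1 up

# Request an IPv4 address on that interface via DHCP
sudo dhclient eth1

# SLAAC requires accepting router advertisements; 1 (or 2) means enabled
sysctl net.ipv6.conf.eth1.accept_ra

# Verify that a global IPv6 address appeared
ip -6 addr show dev eth1 scope global
```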
---
title: "News"
date: 2021-05-18T11:22:35+02:00
draft: false
disableToc: true
GeekdocHidden: true
---
**2022-04-04** OpenStack loadbalancer Octavia reconfigured to increase stability and add support for [loadbalancer HA (amphorae active/standby) mode](https://docs.openstack.org/octavia/latest/contributor/specs/version0.8/active_passive_loadbalancer.html). Remaining issues with LBaaS component were addressed.
**2022-03-14** New public networks added to OpenStack:
* IPv4 group network: `public-cesnet-195-113-167-GROUP`
* IPv4 personal network: `public-muni-147-251-115-PERSONAL`
* IPv6 network: `public-muni-v6-432`
Additional information is available on [Networking](/cloud/network/) page.
**2022-03-07**
1. OpenStack cloud security review and related improvements
2. [CentOS 8 cloud images are going to be deprecated in the coming weeks](https://www.centos.org/centos-linux-eol/) in favor of AlmaLinux 8
**2022-02-21** Openstack cloud internal Monasca monitoring services were replaced by Prometheus, Thanos & Alertmanager.
**2021-12-13**
1. Coref cluster was handed over; it is not ready for use yet
2. Automatic image rotation mechanism was added.
**2021-12-06** Monasca software update.
**2021-10-27** Upgraded cloud infrastructure proxy to Traefik 2.5; related issues resolved (OpenStack VM instance console not available).
**2021-09-07** New cloud infrastructure monitoring based on prometheus.io technologies added.
**2021-06-19** Flavors *hpc.xlarge*, *hpc.18core-48ram* and *hpc.16core-128ram* have parameters *IOPS*, *net throughput* and *disk throughput* set as **Unlimited**.
**2021-05-21** Flavor list was created and published. Also parameters of following flavors were changed:
* hpc.8core-64ram
* hpc.8core-16ram
* hpc.16core-32ram
* hpc.18core-48ram
* hpc.small
* hpc.medium
* hpc.large
* hpc.xlarge
* hpc.xlarge-memory
* hpc.16core-128ram
* hpc.30core-64ram
* hpc.30core-256ram
* hpc.ics-gladosag-full
* csirtmu.tiny1x2
All updated parameters (net throughput, IOPS, and disk throughput) were increased; none were decreased. Existing instances keep the previous parameters, so if you want the new ones, **make a data backup** and rebuild your instance. You can check the list of flavors [here](/cloud/flavors).
**2021-04-13** OpenStack image `centos-8-1-1911-x86_64_gpu` deprecation in favor of `centos-8-x86_64_gpu`. The deprecated image will be still available for existing VM instances but will be moved from public to community images in about 2 months.
**2021-04-05** OpenStack images renamed
**2021-03-31** User documentation update
**2020-07-24** Octavia service (LBaaS) released
**2020-06-11** [Public repository](https://gitlab.ics.muni.cz/cloud/cloud-tools) where OpenStack users can find useful tools
**2020-05-27** Openstack was updated from `stein` to `train` version
**2020-05-13** Ubuntu 20.04 LTS (Focal Fossa) available in image catalog
**2020-05-01** Released [Web page](https://projects.brno.openstack.cloud.e-infra.cz/) for requesting Openstack projects
---
title: "Project Expiration Policy"
date: 2021-06-18T11:22:35+02:00
draft: false
disableToC: true
GeekdocHidden: true
---
Every group project has an expiration date, set when the project is created. Expired projects are disabled and their data later removed.
When a project expires, we contact its owner, who can either extend the project's expiration date or confirm its expiration. You have *two weeks* to respond to this e-mail; after that, the project will be disabled. After another *month*, all resources including data will be removed permanently.