From 5faf1c193843efb660c89d2b2984611ae8c82bcc Mon Sep 17 00:00:00 2001
From: Jan Siwiec <jan.siwiec@vsb.cz>
Date: Mon, 6 Mar 2023 11:25:11 +0100
Subject: [PATCH] Fix title capitalization

---
 .spelling                                     | 10 ++++++
 README.md                                     |  6 ++--
 main/compute/index.md                         |  6 ++--
 main/index.md                                 |  2 +-
 .../contributing/set-up-and-work-locally.md   |  6 ++--
 .../contributing/work-within-gitlab-ui.md     |  5 ++-
 topics/about-us/docs/contribute/index.md      | 17 +++++-----
 .../docs/contribute/style-guide/mkdocs-101.md |  2 +-
 .../docs/contribute/style-guide/style.md      |  6 ++--
 .../docs/contribute/style-guide/test-rules.md |  2 +-
 .../style-guide/writing-practices.md          |  2 +-
 .../docs/contribute/technical-details.md      | 34 ++++++++++---------
 topics/account/docs/access.md                 |  2 +-
 topics/account/docs/mfa/index.md              |  4 +--
 topics/account/docs/mfa/perform.md            |  4 +--
 topics/compute/concepts/docs/comparison.md    |  6 ++--
 topics/compute/concepts/docs/hw.md            |  5 +--
 topics/compute/concepts/docs/index.md         |  6 ++--
 .../docs/use-cases/mmci-muni-bbmri.md         |  5 ++-
 .../concepts/docs/use-cases/muni-kypo.md      |  7 ++--
 topics/compute/grid/docs/index.md             |  2 +-
 .../kubernetes/docs/apps/owncloud/index.md    |  6 ++--
 .../docs/get-started/hello-world/index.md     | 33 ++++++++++++++----
 .../kubernetes/docs/get-started/index.md      |  2 +-
 .../container_build/building_containers.md    | 20 +++++++----
 .../docs/gitops/git_agent/agentk.md           |  8 ++---
 .../docs/additional-information/about.md      |  6 ++--
 .../concepts-of-cloud-computing.md            |  2 +-
 .../additional-information/custom-images.md   | 24 ++++++-------
 .../additional-information/gpu-computing.md   |  2 +-
 .../ip-allocation-policy.md                   |  6 ++--
 .../ipv6-troubleshooting.md                   | 10 +++---
 .../additional-information/object-storage.md  |  7 ++--
 .../docs/additional-information/register.md   |  4 +--
 .../terms-of-service.md                       |  2 +-
 .../using-cloud-tools.md                      |  3 +-
 .../virtual-networking.md                     |  2 +-
 .../docs/additional-information/windows.md    |  3 +-
 .../creating-first-infrastructure.md          |  2 +-
 .../docs/getting-started/creating-project.md  |  2 +-
 .../docs/how-to-guides/accessing-instances.md |  6 ++--
 .../how-to-guides/allocating-floating-ips.md  |  2 +-
 .../docs/how-to-guides/attaching-interface.md |  4 +--
 .../how-to-guides/attaching-remote-storage.md |  2 +-
 .../how-to-guides/changing-vm-resources.md    |  4 +--
 .../docs/how-to-guides/create-networking.md   |  6 ++--
 .../how-to-guides/deploying-loadbalancers.md  |  7 ++--
 .../high-availability-deployment.md           |  2 +-
 .../maintaining-cloud-resources.md            |  2 +-
 .../docs/how-to-guides/manage-volumes.md      | 14 ++++----
 .../docs/how-to-guides/obtaining-api-key.md   |  2 +-
 .../docs/how-to-guides/using-backups.md       |  2 +-
 .../using-custom-linux-images.md              |  2 +-
 .../how-to-guides/using-object-storage.md     |  2 +-
 .../technical-reference/cloud-resources.md    | 28 ++++++---------
 .../docs/technical-reference/data-storage.md  |  2 +-
 .../docs/technical-reference/get-support.md   |  2 +-
 .../technical-reference/image-rotation.md     |  2 +-
 .../openstack-management.md                   |  2 +-
 .../technical-reference/openstack-modules.md  |  2 +-
 .../technical-reference/openstack-status.md   |  2 +-
 .../docs/technical-reference/quota-limits.md  | 20 +++++++----
 .../docs/technical-reference/remote-access.md | 24 ++++++-------
 .../service-level-indicators.md               |  2 +-
 .../docs/technical-reference/volume-usage.md  |  2 +-
 topics/compute/sensitive/docs/get-project.md  |  7 ++--
 topics/compute/sensitive/docs/index.md        |  9 ++---
 .../compute/sensitive/docs/manage-project.md  | 30 ++++++++++------
 .../sensitive/docs/migration-from-muni.md     |  6 ++--
 .../managed/docs/network/secure-vpn/index.md  |  8 +++--
 .../managed/docs/portals/binderhub/index.md   | 11 ++++--
 topics/managed/docs/portals/index.md          |  2 +-
 .../managed/docs/portals/jupyterhub/index.md  | 27 +++++++++------
 .../managed/docs/workflow-execution/teswes.md |  5 ++-
 74 files changed, 297 insertions(+), 236 deletions(-)

diff --git a/.spelling b/.spelling
index d33f6426..9bb27fcb 100644
--- a/.spelling
+++ b/.spelling
@@ -1,3 +1,12 @@
+KYPO
+Cerit-SC
+WS-PGRADE
+DIRAC
+ppk
+sshuttle
+pip
+TESK
+WES
 NVIDIA DGX-2
 nvidia
 smi
@@ -812,3 +821,4 @@ PROJECT
 e-INFRA
 e-INFRA CZ
 DICE
+TOTP
diff --git a/README.md b/README.md
index 1da1725e..f2c78d68 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# User documentation
+# User Documentation
 
 This project contains e-INFRA CZ documentation portal source.
 
@@ -22,7 +22,7 @@ $ sudo ./start.sh
 
 the `-b` parameter builds the container with required dependencies.
 
-### Package upgrade with pip
+### Package Upgrade With pip
 
 ```console
 $ pip list -o
@@ -50,7 +50,7 @@ Mellanox
 
 ## Mathematical Formulae
 
-### Formulas are made with:
+### Formulas Are Made With:
 
 * [https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/](https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/)
 * [https://www.mathjax.org/](https://www.mathjax.org/)
diff --git a/main/compute/index.md b/main/compute/index.md
index 2828a972..88bcf60e 100644
--- a/main/compute/index.md
+++ b/main/compute/index.md
@@ -3,7 +3,7 @@ hide:
   - toc
 ---
 
-# Computing services
+# Computing Services
 
 e-INFRA CZ provides a wide range of computational services for the scientific community. In the following sections of the documentation you will learn how to choose the right service and how to make the most of it.
 
@@ -43,12 +43,12 @@ e-INFRA CZ provides a wide range of computational services for the scientific co
 
 </div>
 
-## How to choose between computing services?
+## How to Choose Between Computing Services?
 
 - [Read computing service overview](./concepts/)
 - Check comparision between computing services _(TODO)_
 
-## See how different scientific use-cases are implemented.
+## See How Different Scientific Use-Cases Are Implemented
 
 - [Cybersecurity platform](./concepts/use-cases/muni-kypo)
 - Sensitive data obtaining to processing on cloud infrastructure _(TODO)_
diff --git a/main/index.md b/main/index.md
index 779afd80..3689a159 100644
--- a/main/index.md
+++ b/main/index.md
@@ -6,7 +6,7 @@ hide:
   - breadcrumbs
   - contributors
 ---
-# Welcome to the e-INFRA CZ documentation!
+# Welcome to the e-INFRA CZ Documentation!
 
 The home for documentation of all e-INFRA CZ services that are provided to scientific community in the Czech Republic.
 
diff --git a/topics/about-us/docs/contribute/contributing/set-up-and-work-locally.md b/topics/about-us/docs/contribute/contributing/set-up-and-work-locally.md
index 8795cd2b..a33abebc 100644
--- a/topics/about-us/docs/contribute/contributing/set-up-and-work-locally.md
+++ b/topics/about-us/docs/contribute/contributing/set-up-and-work-locally.md
@@ -1,4 +1,4 @@
-# Set up and work locally
+# Set Up and Work Locally
 
 One of the mechanisms to work with the documentation is to run it on your computer using Docker. This allows you to work offline and see the documentation rendered in a web browser.
 
@@ -40,7 +40,7 @@ By default the URL where the server listens is [http://localhost:8080][4]
 !!! note
     Edits will be shown live in your browser window, no need to restart the server.
 
-### Partial documentation building
+### Partial Documentation Building
 
 If you don't want to build the whole documentation (due to its big build time), you can choose to build only subset of the whole documentation site by using argument `-f <path to mkdocs.yml of subdocumentation>`
 
@@ -48,7 +48,7 @@ If you don't want to build the whole documentation (due to its big build time),
 ./start.sh -f topics/about-us/mkdocs.yml
 ```
 
-## Publishing changes
+## Publishing Changes
 
 Now you are ready to send changes to your forked repository of the e-INFRA CZ documentation.
 
diff --git a/topics/about-us/docs/contribute/contributing/work-within-gitlab-ui.md b/topics/about-us/docs/contribute/contributing/work-within-gitlab-ui.md
index 50ffae71..c0335607 100644
--- a/topics/about-us/docs/contribute/contributing/work-within-gitlab-ui.md
+++ b/topics/about-us/docs/contribute/contributing/work-within-gitlab-ui.md
@@ -1,12 +1,11 @@
-# Contribute within Gitlab GUI
+# Contribute Within GitLab GUI
 
 This option is suitable for less extensive contribution,
 e.g. a section or a subsection of an already existing page.
 
 In this case, simply:
 
-1. click the **Edit this page**
-under the Table of Content on the right side of the respective page;
+1. click **Edit this page** under the Table of Contents on the right side of the respective page;
 1. make the changes;
 1. create a merge request.
 
diff --git a/topics/about-us/docs/contribute/index.md b/topics/about-us/docs/contribute/index.md
index 78e499bb..688e0a1a 100644
--- a/topics/about-us/docs/contribute/index.md
+++ b/topics/about-us/docs/contribute/index.md
@@ -4,7 +4,7 @@ hide:
 authors:
   - rosinec
 ---
-# Documentation overview
+# Documentation Overview
 
 This section is about how to contribute to the e&#8209;INFRA&#160;CZ documentation. The guide is intended for service providers but also for users of the documentation and e&#8209;INFRA&#160;CZ services, who can participate in building the documentation and thus help other users.
 
@@ -18,8 +18,8 @@ For service providers, a detailed specification of how the documentation is buil
 
     Anyone can contribute. You can work with web editor or run whole documentation on your PC.
 
-    [:octicons-arrow-right-24: Contribute within web editor](../contributing/work-within-gitlab-ui)   
-    [:octicons-arrow-right-24: Contribute locally](../contributing/set-up-and-work-localy)      
+    [:octicons-arrow-right-24: Contribute within web editor](../contributing/work-within-gitlab-ui)
+    [:octicons-arrow-right-24: Contribute locally](../contributing/set-up-and-work-localy)
 
 -   :fontawesome-solid-microchip:{ .md .middle } __Technical details__
 
@@ -27,8 +27,8 @@ For service providers, a detailed specification of how the documentation is buil
 
     All technical information about the documentation. Targets e&#8209;INFRA&#160;CZ service providers.
 
-    [:octicons-arrow-right-24: Technical details](../technical-details)   
-    [:octicons-arrow-right-24: Integration of the new service](../technical-details/#integration-of-the-new-service)   
+    [:octicons-arrow-right-24: Technical details](../technical-details)
+    [:octicons-arrow-right-24: Integration of the new service](../technical-details/#integration-of-the-new-service)
 
 -   :fontawesome-solid-atom:{ .md .middle } __Language of the documentation__
 
@@ -36,8 +36,7 @@ For service providers, a detailed specification of how the documentation is buil
 
     How to write documentation to make it useful, and what elements are available.
 
-    [:octicons-arrow-right-24: Writing practices](../style-guide/writing-practices)      
-    [:octicons-arrow-right-24: Style](../style-guide/style)   
+    [:octicons-arrow-right-24: Writing practices](../style-guide/writing-practices)
+    [:octicons-arrow-right-24: Style](../style-guide/style)
 
-
-</div>
\ No newline at end of file
+</div>
diff --git a/topics/about-us/docs/contribute/style-guide/mkdocs-101.md b/topics/about-us/docs/contribute/style-guide/mkdocs-101.md
index 12bda3b1..6865e3cf 100644
--- a/topics/about-us/docs/contribute/style-guide/mkdocs-101.md
+++ b/topics/about-us/docs/contribute/style-guide/mkdocs-101.md
@@ -62,7 +62,7 @@ Aligning columns to left/center/right is done by placing `:` characters at the b
 | Content      |     Content    |      Content |
 ```
 
-## Code blocks
+## Code Blocks
 
 Starts and ends with three backticks `\``.
 You can specify the environment/language (console in the example below) at the beginning. But not necessary.
diff --git a/topics/about-us/docs/contribute/style-guide/style.md b/topics/about-us/docs/contribute/style-guide/style.md
index 0ade88c8..9a919731 100644
--- a/topics/about-us/docs/contribute/style-guide/style.md
+++ b/topics/about-us/docs/contribute/style-guide/style.md
@@ -1,4 +1,4 @@
-# Documentation style
+# Documentation Style
 
 This section focuses on the best practise of usage of various components (code blocks, notes, diagrams, ...) used within the documentation.
 
@@ -61,7 +61,7 @@ authors
 ---
 ```
 
-## Hiding breadcrumbs
+## Hiding Breadcrumbs
 
 It is possible to hide __breadcrumbs__ by adding to metadata:
 ``` markdown
@@ -69,4 +69,4 @@ It is possible to hide __breadcrumbs__ by adding to metadata:
 hide:
   - breadcrumbs
 ---
-```
\ No newline at end of file
+```
diff --git a/topics/about-us/docs/contribute/style-guide/test-rules.md b/topics/about-us/docs/contribute/style-guide/test-rules.md
index 5e902196..474365ab 100644
--- a/topics/about-us/docs/contribute/style-guide/test-rules.md
+++ b/topics/about-us/docs/contribute/style-guide/test-rules.md
@@ -55,7 +55,7 @@ But this is not allowed:
     - Item 1
     - Item 2
 
-## MD005 - Inconsistent Indentation For List Items at Same Level
+## MD005 - Inconsistent Indentation for List Items at Same Level
 
 This rule is triggered when list items are parsed as being at the same level,
 but don't have the same indentation:
diff --git a/topics/about-us/docs/contribute/style-guide/writing-practices.md b/topics/about-us/docs/contribute/style-guide/writing-practices.md
index 0c400316..00126894 100644
--- a/topics/about-us/docs/contribute/style-guide/writing-practices.md
+++ b/topics/about-us/docs/contribute/style-guide/writing-practices.md
@@ -51,7 +51,7 @@ Below, you can find notes and recommendations on how to contribute to the docume
   * Use descriptive name for an image.
   * Use images in the `png` format.
 
-## Authoring and responsibility
+## Authoring and Responsibility
 
 Each topic has responsible department or person which will be responsible for approving changes.
 
diff --git a/topics/about-us/docs/contribute/technical-details.md b/topics/about-us/docs/contribute/technical-details.md
index bed8d97d..7bb13157 100644
--- a/topics/about-us/docs/contribute/technical-details.md
+++ b/topics/about-us/docs/contribute/technical-details.md
@@ -1,11 +1,13 @@
-# Technical details
+# Technical Details
 
 Technical details for nerds.
 
-## Structure of the documentation
+## Structure of the Documentation
+
 Thanks to the [Monorepo][1] plugin for the mkdocs platform, it is possible to create a documentation structure consisting of sub-topic-related documentation.
 
-The documentation is organized to topics.   
+The documentation is organized into topics.
+
 ```console
 .
 ├── topics
@@ -31,9 +33,10 @@ The documentation is organized to topics.
 
 Each e-INFRA service is represented by it's own topic, therefore folder structure consists of `topics`, `docs`, `mkdocs.yml` file and `e-infra_theme` folder. Which are essential files in the documentation system.
 
-## Documentation configuration - mkdocs.yml
+## Documentation Configuration - mkdocs.yml
 
 The most important part of the child documentation is `mkdocs.yml` file, where the navigation and structure of the documentation is defined. The important options are:
+
 ```yml title="Example of mkdocs.yml"
 site_name: "computing/cloud/openstack" # will be used in URL
 nav:
@@ -44,7 +47,7 @@ nav:
 ## Using Git Submodules
 
 The documentation can be composed from different git repositories thanks to the `git submodules` concept. To add new submodule, use standard `git submodule` commands.
-The resulting file, where submodule should be registered is `.gitsubmodule`. Example of such file can be observed within the documentation project in the root directory (or in following code snippet). 
+The resulting file, where the submodule should be registered, is `.gitmodules`. An example of such a file can be observed within the documentation project in the root directory (or in the following code snippet).
 
 !!! warning
 
@@ -58,25 +61,24 @@ The resulting file, where submodule should be registered is `.gitsubmodule`. Exa
 
 Use `topics/compute/supercomputing` as current working example of how to use the `git submodules`.
 
-## Integration of the new service
+## Integration of the New Service
 
-As service owner you can easily integrate your service documentation within the main repository or use submodule. 
+As a service owner, you can easily integrate your service documentation within the main repository or use a submodule.
 To establish right place for the service documentation, please contact us at support@e-infra.cz.
 
-## Development process
+## Development Process
 
 The development process of the documentation is supported by CI/CD. Each commit to any remote branch will trigger an automatic pipeline that will try deploy changed documentation to the desired location.
 
-Pipeline consists of:   
+The pipeline consists of:
 
-1. Various test, see more at [writing practices page][2]   
-1. Building documentation - runinng `mkdocs build`   
-1. Pushing resulted artifact (final documentation site) to:   
-    1. If changes were pushed to the __main__ branch, the site is deployed to the **docs.e-infra.cz** immidiately.   
-    2. If changes were pushed to the __any other__ branch, the site is deployed to the special URL for review - **docs.e-infra.cz/review/branch_name**   
+1. Various tests; see more at the [writing practices page][2]
+1. Building the documentation - running `mkdocs build`
+1. Pushing the resulting artifact (the final documentation site) to:
+    1. If changes were pushed to the __main__ branch, the site is deployed to **docs.e-infra.cz** immediately.
+    2. If changes were pushed to __any other__ branch, the site is deployed to a special URL for review - **docs.e-infra.cz/review/branch_name**
 
 Please note, the pipeline is being run also 10 minutes after midnight each day to ensure, that submodule components of the documentation are being updated.
 
-
 [1]: https://github.com/backstage/mkdocs-monorepo-plugin
-[2]: ../style-guide/writing-practices
\ No newline at end of file
+[2]: ../style-guide/writing-practices
diff --git a/topics/account/docs/access.md b/topics/account/docs/access.md
index 15294172..aec8cdfc 100644
--- a/topics/account/docs/access.md
+++ b/topics/account/docs/access.md
@@ -66,7 +66,7 @@ credentials (your e-INFRA CZ login and password) and submit the form.
 Alternatively if the service is actually using the first way to log in, this 
 option is called *e-INFRA CZ password*.  
 
-### Non-web Services
+### Non-Web Services
 
 Non-web services are by their nature using different means to authenticate 
 the user and exact description of how to access them should be a part of 
diff --git a/topics/account/docs/mfa/index.md b/topics/account/docs/mfa/index.md
index 35902557..021cfc7c 100644
--- a/topics/account/docs/mfa/index.md
+++ b/topics/account/docs/mfa/index.md
@@ -118,13 +118,13 @@ can be used.
 The [Touch ID](https://support.apple.com/en-in/guide/mac-help/mchl16fbf90a/mac) 
 feature can be used.
 
-#### WebAuthN on Linux PC with FIDO2-compatible Hardware Token
+#### WebAuthN on Linux PC With FIDO2-compatible Hardware Token
 
 USB hardware tokens that support [FIDO2](https://en.wikipedia.org/wiki/FIDO2_Project)
 , like [Yubikey](https://www.yubico.com/authentication-standards/fido2/), 
 can be used.
 
-#### WebAuthN on Linux PC with Android Phone Used for Second Factor
+#### WebAuthN on Linux PC With Android Phone Used for Second Factor
 
 This use case requires a rather specific setup. The Linux PC must have 
 Bluetooth enabled, Google Chrome browser must be used on the PC, and an 
diff --git a/topics/account/docs/mfa/perform.md b/topics/account/docs/mfa/perform.md
index 709fbb76..e8bf5818 100644
--- a/topics/account/docs/mfa/perform.md
+++ b/topics/account/docs/mfa/perform.md
@@ -1,6 +1,6 @@
 # Perform MFA
 
-## When is MFA Required?
+## When Is MFA Required?
 
 MFA is required at login if **the service you are accessing requires it**.
 This is usually true for the services which works with sensitive data or
@@ -31,7 +31,7 @@ choose any of the options.
 You will be displayed an error message if you cannot fulfil MFA
 requirement - e.g. you don't have any registered tokens or verification fails.
 
-### MFA from Your Home Organization Login
+### MFA From Your Home Organization Login
 
 If you perform MFA during the login process within the context of your home
 organization account and information about it is released to us by the identity
diff --git a/topics/compute/concepts/docs/comparison.md b/topics/compute/concepts/docs/comparison.md
index df869cfe..24d9ca97 100644
--- a/topics/compute/concepts/docs/comparison.md
+++ b/topics/compute/concepts/docs/comparison.md
@@ -1,6 +1,6 @@
-# Comparing traditional HPC environemnt to the cloud services
+# Comparing Traditional HPC Environment to Cloud Services
 
-## Which MetaCentrum cloud do I need?
+## Which MetaCentrum Cloud Do I Need?
 
 There are multiple MetaCentrum cloud environments:
 
@@ -11,5 +11,5 @@ There are multiple MetaCentrum cloud environments:
 Each of them is targetting different use-cases. General rules of thumb are:
 
 1. You need to use MetaCentrum Sensitive if your application deals with sensitive data.
-1. You want to consider MetaCentrum Kubernetes if your application is already containerized and does not require any special environment (such as VM isolation, networking separation or specific networking configuration, ...).
+1. You want to consider MetaCentrum Kubernetes if your application is already containerized and does not require any special environment (such as VM isolation, networking separation or specific networking configuration, etc.).
 1. Otherwise look into MetaCentrum OpenStack.
diff --git a/topics/compute/concepts/docs/hw.md b/topics/compute/concepts/docs/hw.md
index 7f5da155..38d7d23c 100644
--- a/topics/compute/concepts/docs/hw.md
+++ b/topics/compute/concepts/docs/hw.md
@@ -8,7 +8,7 @@ aside:
 sidebar:
   nav: docs
 ---
-## Kubernetes clusters
+## Kubernetes Clusters
 
 ### kuba-cluster
 
@@ -22,7 +22,8 @@ This cluster comprises 2560 *hyperthreaded* CPU cores, 2530 available to users,
 | GPU:      | none or 1 or 2 NVIDIA A40 per node  |
 | Network:  | 2x 10Gbps NIC                       |
 
-## OpenStack clusters
+## OpenStack Clusters
+
 _TBD_
 
 ## Storage
diff --git a/topics/compute/concepts/docs/index.md b/topics/compute/concepts/docs/index.md
index 87ccc891..09349f74 100644
--- a/topics/compute/concepts/docs/index.md
+++ b/topics/compute/concepts/docs/index.md
@@ -4,7 +4,7 @@ hide:
   - toc
 ---
 
-# Computing services
+# Computing Services
 
 e-INFRA CZ provides a wide range of computational services for the scientific community. In the following sections of the documentation you will learn how to choose the right service and how to make the most of it.
 
@@ -44,12 +44,12 @@ e-INFRA CZ provides a wide range of computational services for the scientific co
 
 </div>
 
-## How to choose between computing services?
+## How to Choose Between Computing Services?
 
 - [Read computing service overview](./concepts/)
 - Check comparision between computing services _(TODO)_
 
-## See how different scientific use-cases are implemented.
+## See How Different Scientific Use-Cases Are Implemented
 
 - [Cybersecurity platform](./concepts/use-cases/muni-kypo)
 - Sensitive data obtaining to processing on cloud infrastructure _(TODO)_
diff --git a/topics/compute/concepts/docs/use-cases/mmci-muni-bbmri.md b/topics/compute/concepts/docs/use-cases/mmci-muni-bbmri.md
index 4f0ffcd7..05f7fb2f 100644
--- a/topics/compute/concepts/docs/use-cases/mmci-muni-bbmri.md
+++ b/topics/compute/concepts/docs/use-cases/mmci-muni-bbmri.md
@@ -27,8 +27,7 @@ Once the huge images in Mirax format enter the raw data storage, they are transf
 
 There is an directory structure optimized within the data storage enabling imediate usage of the TIFF files by the pathologist via Hospital Information System (HIS).
 
-
-## Target cloud mapping
+## Target Cloud Mapping
 
 TODO
 
@@ -37,8 +36,8 @@ TODO
 Conversion from Mirax proprietary format to TIFF open format is done by using open-source convertor [Vips](https://libvips.github.io/pyvips/intro.html).
 Virtual server visualizing the TIFF images uses open-source [OpenSeadragon project](https://openseadragon.github.io/).
 
-
 ## References
+
 * https://www.bbmri.cz/
 * https://www.mou.cz/en/about-the-mmci/t1632
 * https://libvips.github.io/pyvips/intro.html
diff --git a/topics/compute/concepts/docs/use-cases/muni-kypo.md b/topics/compute/concepts/docs/use-cases/muni-kypo.md
index ad3fe27f..98acbc59 100644
--- a/topics/compute/concepts/docs/use-cases/muni-kypo.md
+++ b/topics/compute/concepts/docs/use-cases/muni-kypo.md
@@ -9,7 +9,7 @@ sidebar:
 
 ## Goals
 
-1. To research methods and develop software for enhancing cybersecurity knowledge and skills. 
+1. To research methods and develop software for enhancing cybersecurity knowledge and skills.
 1. To provide tools for economically-and-time efficient simulation of real Critical Information Infrastructures (CIIs), detecting of cyber-threats, and then mitigation.
 1. Support for research and development of new methods to protect critical infrastructure against attacks.
 
@@ -17,7 +17,7 @@ sidebar:
 
 The architecture consists of several components briefly described [here](https://docs.crp.kypo.muni.cz/basic-concepts/conceptual-architecture/).
 
-## Target cloud mapping
+## Target Cloud Mapping
 
 As the architecture requires dynamic creation of following resources
 
@@ -36,12 +36,11 @@ KYPO CRP uses the same open approach for the content as for its architecture to
 
 The KYPO CRP sources are currently hosted at https://gitlab.ics.muni.cz/muni-kypo-crp.
 
-
 ## References
+
 * https://crp.kypo.muni.cz/
 * https://docs.crp.kypo.muni.cz/
 * https://www.muni.cz/en/research/projects/48647
 * https://www.muni.cz/en/research/projects/31984
 * https://www.muni.cz/en/research/projects/23884
 * https://www.muni.cz/en/research/projects/43025
-
diff --git a/topics/compute/grid/docs/index.md b/topics/compute/grid/docs/index.md
index 1cb4dbe1..5bc7c806 100644
--- a/topics/compute/grid/docs/index.md
+++ b/topics/compute/grid/docs/index.md
@@ -3,6 +3,6 @@ hide:
   - toc
 ---
 
-# e-INFRA CZ Grid computing
+# e-INFRA CZ Grid Computing
 
 Metacentrum Grid documentation is currently located at [https://wiki.metacentrum.cz/wiki/Categorized_list_of_topics](https://wiki.metacentrum.cz/wiki/Categorized_list_of_topics)
diff --git a/topics/compute/kubernetes/docs/apps/owncloud/index.md b/topics/compute/kubernetes/docs/apps/owncloud/index.md
index 0acf5239..44ab69ea 100644
--- a/topics/compute/kubernetes/docs/apps/owncloud/index.md
+++ b/topics/compute/kubernetes/docs/apps/owncloud/index.md
@@ -20,7 +20,7 @@ Following the steps below, you can run ownCloud client application. This applica
 * Important prerequisite: application pod needs to use [PVC](pvc.html) in order to ownCloud client can sync data into the pod. If no PVC is used, no sync is possible.
   * For most applications here it is possible to select *persistent home* which implies using *PVC*.
 
-### Check your Application
+### Check Your Application
 
 Select your `Namespace` (1),  navigate through `Workload` (2), `Pods` (3), and name of the application (4), e.g., `ansys-0` -- click on the name. See screenshot below.
 
@@ -42,13 +42,13 @@ Navigate through `App & Marketplace` (2), `Charts` (3), limit charts only to `ce
 
 ![selectapp](selectapp.png)
 
-### Select Version of the Application
+### Select Version of Application
 
 When you click on the chart, hit `Install` to continue.
 
 ![selectversion](selectversion.png)
 
-### Install the Application
+### Install Application
 
 Now you can install the ownCloud application. In most cases, keep both `Namespace` (1) and `Name` (2) intact, however, you can select namespace as desired except `default`. The `default` namespace is available but it is not meant to be used. The `Name` will be in URL to access the application. The `Name` must be unique in the `Namespace`, i.e., you cannot run two or more instances with the same `Name` in the same `Namespace`. If you delete the application and later install the application again preserving its `Name`, content of home directory will be preserved. 
 
diff --git a/topics/compute/kubernetes/docs/get-started/hello-world/index.md b/topics/compute/kubernetes/docs/get-started/hello-world/index.md
index 1c3f1a46..5df9a4f3 100644
--- a/topics/compute/kubernetes/docs/get-started/hello-world/index.md
+++ b/topics/compute/kubernetes/docs/get-started/hello-world/index.md
@@ -1,20 +1,22 @@
-# Hello world tutorial
+# Hello World Tutorial
 
 Here we provide a short tutorial on how to deploy a custom webserver in Kubernetes with `kubectl`. We shall use already existing example from [hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes) but with a bit more explanation. This tutorial by far doesn't include everything that can be configured and done but rather provides first experience with Kubernetes.
 
 We are going to deploy a simple web that runs from Docker image and displays "Hello world" together with `Pod` name and `node OS` information. 
 
-*IMPORTANT*
-Unless agreed beforehand, for personal projects and experiments you can use `kuba-cluster`. Here, you have to work in your namespace and its name is derived from your last name with added `-ns`. However, names are not unique and therefore we recommend to check yours on `Rancher` in the drop-down menu in the upper left corner `kuba-cluster` and `Project/Namespaces`.
+!!! important
+    Unless agreed beforehand, you can use `kuba-cluster` for personal projects and experiments. Here, you have to work in your namespace, whose name is derived from your last name with `-ns` appended. However, names are not unique, so we recommend checking yours in `Rancher`: open the drop-down menu in the upper left corner, select `kuba-cluster`, then `Project/Namespaces`.
 
 ![kube ns](ns.jpg)
 
-## Create files
+## Create Files
 
 We have to create at least 3 Kubernetes resources to deploy the app -- `Deployment`, `Service`, `Ingress`.
 
 ### 1. Deployment
+
 Create new directory, e.g. `hellok` and inside, create new file `deployment.yaml `with content:
+
 ```yaml
 apiVersion: apps/v1
 kind: Deployment
@@ -38,6 +40,7 @@ spec:
         ports:
         - containerPort: 8080
 ```
+
 This example file is composed of fields:
 - `.metadata` 
   - `.name` denotes deployment's name
@@ -53,6 +56,7 @@ This example file is composed of fields:
 Complete reference docs for resources and their allowed fields and subfields is available [online](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/). Don't forget right indentation! 
 
 ### 2. Service
+
 Secondly, we have to create `Service` which is abstract way to expose an application as a network service.
 
 ```yaml
@@ -78,6 +82,7 @@ A Service can map any incoming port to a targetPort. By default, the targetPort
 Lastly, we have to create `Ingress` which exposes HTTP and HTTPS routes from outside world to the cluster world. Traffic is controled by rules set in the resource.
 It is possible to expose your deployments in [2 ways](/docs/kubectl-expose.html) but here we will use cluster LoadBalancer with creation of just new DNS name.
 You can use whatever name you want but it has to fullfill 2 requirements:
+
 - name is composed only from letters, numbers and '-'
 - name ends with `.dyn.cloud.e-infra.cz`
 
@@ -110,6 +115,7 @@ spec:
 ```
 
 This example file is composed of fields:
+
 - `.metadata` 
   - `.name` denotes name
   - `.annotations` ingress frequently uses annotations to configure options depending on ingress controller. We use `nginx` controller and possible annotations are listed [here](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/). The ones used here are necessary for right functionality and they automatically create TLS ceritificate therefore you don't need to worry about HTTPS - it's provided automatically
@@ -122,7 +128,9 @@ This example file is composed of fields:
     - `paths` (for example, `/testpath`), each of which has an associated backend defined with a `service.name` and a `service.port.name` or `service.port.number`. `service.port.number` is the port exposed by the service, denoted in the service as `spec.ports.port`; similarly, `service.ports.[i].name` is equivalent to `spec.ports.[i].name`. A path type can be specified; more about it [here](https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types)
 
 ### 4. Create
-Now, create all resources with using whole directory as an argument and specify your namespace 
+
+Now, create all resources by using the whole directory as an argument and specify your namespace:
+
 ```bash
 kubectl apply -f hello-world -n [namespace]                     
 deployment.apps/hello-kubernetes created
@@ -133,7 +141,8 @@ service/hello-kubernetes-svc created
 You can check the status of deployed resources with `kubectl get pods/services/ingress -n [namespace]`. When all of them are up and running, you can access the URL and you will be presented with a sample page.
 ![hello](hello.png)
 
-## Further customization
+## Further Customization
+
 You can specify various fields in every resource's file, many of them not used here. One of the more sought-after features is passing environment variables into `Deployments` in case the spawned containers need them. We will use one environment variable in our deployment to change the displayed message. At the end, add a new section `env` which will forward the value into the pod. Then run `kubectl apply -f hello-world -n [namespace]` again to apply the changes. When you access the website now, the new message is displayed!
 
 ```yaml
@@ -173,9 +182,11 @@ Other customization can include:
 
 
 ## Creating PersistentVolumeClaim
+
 If you need persistent storage, you can request an NFS volume and mount it in the `Deployment`.
 
 Example: create a file `claim.yaml` with the following content:
+
 ```yaml
 apiVersion: v1
 kind: PersistentVolumeClaim
@@ -189,7 +200,9 @@ spec:
       storage: 1Gi                                                              
   storageClassName: nfs-csi
 ```
+
 The `spec.resources.requests` field has to be specified but has no practical effect here. Then run `kubectl apply -f claim.yaml -n [namespace]`. You can check that everything went fine by running:
+
 ```bash
 kubectl get pvc -n [namespace]
 NAME                                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
@@ -228,6 +241,7 @@ spec:
 ```
 
 ## Pod Security Policy
+
 For security reasons, not everything is allowed in `kuba-cluster`. 
 
 List of (dis)allowed actions:
@@ -238,20 +252,25 @@ List of (dis)allowed actions:
 - Volumes: can mount `configMap, emptyDir, projected, secret, downwardAPI, persistentVolumeClaim`
 
 Any deployment that attempts to run as root won't be created and will remain in a state similar to the following (notice READY 0/3 and AVAILABLE 0; logs and `kubectl describe` would tell more):
+
 ```bash
 NAME               READY   UP-TO-DATE   AVAILABLE   AGE
 hello-kubernetes   0/3     3            0           7m8s
 ```
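If your image supports running as a non-root user, one way to satisfy this policy is to set a `securityContext` in the deployment's pod template. A minimal sketch (the UID/GID values are illustrative assumptions; use IDs that are valid for your image):

```yaml
# Fragment of deployment.yaml -- run the container as a non-root user
# so the pod security policy admits the deployment.
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true   # reject the pod if the image would run as root
        runAsUser: 1000      # hypothetical non-root UID
        fsGroup: 1000        # hypothetical group applied to mounted volumes
```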
 
-## Kubectl command
+## Kubectl Command
+
 There are many useful `kubectl` commands that can be used to verify the status of deployed resources or get information about them. Some of the handiest:
 - `kubectl get [resource]` provides basic information about a resource, e.g. if we query a service, we can see its IP address
+
 ```bash
 kubectl get service hello-kubernetes-svc -n [namespace]
 NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
 hello-kubernetes-svc   LoadBalancer   10.43.124.251   147.251.253.243   80:31334/TCP   3h23m
 ```
+
 - `kubectl describe [resource]` offers detailed information about a resource (output is heavily trimmed)
+
 ```bash
 kubectl describe pod hello-kubernetes -n test-ns
 Name:         hello-kubernetes-5547c96ddc-4hxnf
diff --git a/topics/compute/kubernetes/docs/get-started/index.md b/topics/compute/kubernetes/docs/get-started/index.md
index 66475f45..4306328a 100644
--- a/topics/compute/kubernetes/docs/get-started/index.md
+++ b/topics/compute/kubernetes/docs/get-started/index.md
@@ -1 +1 @@
-# Getting started tutorials with Kubernetes
\ No newline at end of file
+# Getting Started Tutorials With Kubernetes
diff --git a/topics/compute/kubernetes/docs/gitops/container_build/building_containers.md b/topics/compute/kubernetes/docs/gitops/container_build/building_containers.md
index 2daaf944..76eb8f93 100644
--- a/topics/compute/kubernetes/docs/gitops/container_build/building_containers.md
+++ b/topics/compute/kubernetes/docs/gitops/container_build/building_containers.md
@@ -12,14 +12,16 @@ sidebar:
 This tutorial shows how to set up a basic GitLab pipeline to automatically build a Docker image and publish it to a container registry.
 
 ## Prerequisites
+
 - GitLab repository
 - Dockerfile in the repository with desired container configuration
 
-## Configure CI/CD Gitlab Pipeline file
-- In your desired repository, use **Set up CI/CD** button as shown in the following image. 
+## Configure CI/CD Gitlab Pipeline File
+
+- In your desired repository, use the **Set up CI/CD** button as shown in the following image.
 ![setup](./gitops/container_build/containers_1.png)
 
-- In the `.gitlab-ci.yml` editor copy and paste following snippet:    
+- In the `.gitlab-ci.yml` editor, copy and paste the following snippet:
 
 ```yaml
 docker-build:
@@ -45,9 +47,10 @@ docker-build:
       exists:
         - Dockerfile
 ```
+
 This will use the standard GitLab Container Registry available within your own project. Note the variable `$CI_REGISTRY`, which is specific to the repository.
 
-- Commit changes 
+- Commit changes
 ![commit](./gitops/container_build/containers_2.png)
 
 - The pipeline will be triggered and its status can be seen in the last commit information.
@@ -61,11 +64,14 @@ The `Packages & Registries > Container registry` is available here:
 
 For advanced information, please refer to the [official documentation](https://docs.gitlab.com/ee/user/packages/container_registry/#container-registry-examples-with-gitlab-cicd).
 
-## How to use images from Gitlab Registry
+## How to Use Images From GitLab Registry
+
 If the project's visibility within GitLab is public and the container registry is not limited to authenticated users, simply use:
+
 ```bash
 docker run [options] registry.example.com/group/project/image [arguments]
 ```
+
 > If you are using MUNI ICS GitLab, the registry URL is: registry.gitlab.ics.muni.cz
 
 If the project visibility or the container registry is set to private, authentication to the container registry is needed. You will need to create a deploy token and use it as described in the [official documentation](https://docs.gitlab.com/ee/user/packages/container_registry/#authenticate-with-the-container-registry).
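As a sketch of what that authentication looks like (the registry URL matches the MUNI ICS GitLab mentioned above; the deploy-token username and value are placeholders you obtain when creating the token):

```shell
# Placeholder credentials -- GitLab displays the real username and token
# value when you create the deploy token under Settings > Repository.
REGISTRY=registry.gitlab.ics.muni.cz
DEPLOY_USER='gitlab+deploy-token-1234'
DEPLOY_TOKEN='<token-value>'

# Log in with the deploy token, then pull the private image.
echo "$DEPLOY_TOKEN" | docker login -u "$DEPLOY_USER" --password-stdin "$REGISTRY"
docker pull "$REGISTRY/group/project/image"
```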
@@ -73,7 +79,9 @@ If project visibility or container registry are set to private, authentication t
 For more information please refer to [official documentation](https://docs.gitlab.com/ee/user/packages/container_registry/#use-images-from-the-container-registry).
 
 ## Upload Image to Custom Container Registry
-Change the options of `docker login` command in `before_script` part of your definition of pipeline (`gitlab-ci.yml`). 
+
+Change the options of the `docker login` command in the `before_script` part of your pipeline definition (`.gitlab-ci.yml`).
+
 ```yaml
   before_script:
     - docker login -u "username" -p "password" example.io
diff --git a/topics/compute/kubernetes/docs/gitops/git_agent/agentk.md b/topics/compute/kubernetes/docs/gitops/git_agent/agentk.md
index 35743278..4a534ab9 100644
--- a/topics/compute/kubernetes/docs/gitops/git_agent/agentk.md
+++ b/topics/compute/kubernetes/docs/gitops/git_agent/agentk.md
@@ -14,12 +14,13 @@ The following text describes how to install GitLab Kubernetes Agent step by step
 Following the steps should leave you with a functional agent and the knowledge to create manifest files.
 
 ## Prerequisites
+
 - Namespace on your cluster
 - GitLab repository
 - kubectl
 
 
-## Define a configuration repository
+## Define Configuration Repository
 
 In your desired repository, add the agent configuration file: `.gitlab/agents/<agent-name>/config.yaml`
 
@@ -37,7 +38,7 @@ gitops:
 **Note**: `<Your Project ID>` can be replaced by your project path.
 
 
-## Connect to cluster
+## Connect to Cluster
 
 - Register agent and get agent token.
   
@@ -79,11 +80,10 @@ gitops:
 
 - Check if the agent is running. Either in rancher or using kubectl `kubectl get pods -n <Your Namespace>`
 
-## Manage deployments
+## Manage Deployments
 
 - In your repository, create a manifest file: `/manifest/manifest.yaml`
  
- 
 For the purpose of testing the agent, we will create a simple manifest file that creates a ConfigMap in `<Your Namespace>`.
 
 ```yaml
diff --git a/topics/compute/openstack/docs/additional-information/about.md b/topics/compute/openstack/docs/additional-information/about.md
index bbb8ea2b..66c29084 100644
--- a/topics/compute/openstack/docs/additional-information/about.md
+++ b/topics/compute/openstack/docs/additional-information/about.md
@@ -7,6 +7,7 @@ search:
 # About MetaCentrum Cloud
 
 ## Hardware
+
 MetaCentrum Cloud consists of 17 computational clusters containing 277 hypervisors
 with a sum of 8968 cores, 96 GPU cards, and 178 TB RAM. Special demand applications
 can utilize our clusters with local SSDs and GPU cards. OpenStack instances, object
@@ -20,12 +21,13 @@ and one of the top 3 most active open source projects in the world. New OpenStac
 released twice a year. OpenStack functionality is separated into more than 50 services.
 
 ## Application
+
 More than 400 users are using the MetaCentrum Cloud platform and more than 130k VMs were started last year.
 
-## MetaCentrum Cloud current release
+## MetaCentrum Cloud Current Release
 
 [OpenStack Train](https://www.openstack.org/software/train/)
 
-## Deployed services
+## Deployed Services
 
 The list of deployed services in MetaCentrum Cloud is available in [Technical reference](../technical-reference/openstack-modules.md).
diff --git a/topics/compute/openstack/docs/additional-information/concepts-of-cloud-computing.md b/topics/compute/openstack/docs/additional-information/concepts-of-cloud-computing.md
index 4db6d84a..87fee892 100644
--- a/topics/compute/openstack/docs/additional-information/concepts-of-cloud-computing.md
+++ b/topics/compute/openstack/docs/additional-information/concepts-of-cloud-computing.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# Concepts of cloud computing
+# Concepts of Cloud Computing
diff --git a/topics/compute/openstack/docs/additional-information/custom-images.md b/topics/compute/openstack/docs/additional-information/custom-images.md
index fdf643e0..cf58ecef 100644
--- a/topics/compute/openstack/docs/additional-information/custom-images.md
+++ b/topics/compute/openstack/docs/additional-information/custom-images.md
@@ -5,12 +5,12 @@ search:
   exclude: false
 ---
 
-# Custom images
+# Custom Images
 
 We don't support uploading personal images by default. MetaCentrum Cloud images are optimized for running in the cloud and we recommend that users
 customize them instead of building their own images from scratch. If you need to upload a custom image, please contact user support at cloud@metacentrum.cz for appropriate permissions.
 
-## Image upload
+## Image Upload
 
 Instructions for uploading a custom image:
 
@@ -42,7 +42,8 @@ os_distro=ubuntu # example
 
 For a more detailed explanation about CLI work with images, please refer to [official documentation](https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html).
 
-## Image visibility
+## Image Visibility
+
 In OpenStack, there are 4 possible visibilities for a particular image: **public, private, shared, community**.
 
 You can view these images via **CLI** or in **dashboard**.
@@ -51,20 +52,19 @@ In **dashboard** visit section *Images* and then you can search via listed image
 
 ![](/compute/openstack/images/image_visibility.png)
 
-
-### Public images
+### Public Images
 
 **Public image** is an image visible and readable to everyone. Only OpenStack admins can modify them.
 
-### Private images
+### Private Images
 
 **Private image** is an image visible only to the owner of that image. This is the default setting for all newly created images.
 
-### Shared images
+### Shared Images
 
 **Shared image** is an image visible only to the owner and possibly certain groups that the owner specified. How to share an image between projects, please read the following [tutorial](#image-sharing-between-projects) below. Image owners are responsible for managing shared images.
 
-### Community images
+### Community Images
 
 **Community image** is an image that is accessible to everyone. Image owners are responsible for managing community images.
 Community images are visible in the dashboard using the `Visibility: Community` query. These images can be listed via the CLI command:
@@ -86,17 +86,17 @@ Creating a new **Community image** can look like this:
 openstack image create --file test-cirros.raw --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_rng_model=virtio --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes --property os_type=linux --community test-cirros
 ```
 
-
 !!! note
 
     References to existing community images should use `<image-id>` instead of `<image-name>`. See [image visibility](https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design) document for more details.
 
 
-## Image sharing between projects
+## Image Sharing Between Projects
 
 There are two ways of sharing an OpenStack Glance image among projects: using `shared` or `community` image visibility.
 
-### Shared image approach
+### Shared Image Approach
+
 Image sharing allows you to share your image with other projects, making it possible to launch instances from that image in those projects with other collaborators. As mentioned in the section about the CLI, you will need to use your OpenStack credentials from the ```openrc``` or ```cloud.yaml``` file.
 
 Then to share an image you need to know its ID, which you can find with the command:
@@ -150,7 +150,7 @@ When you find ```<ID_project_to_unshare>``` of project, you can cancel the acces
 openstack image remove project <image ID> <ID_project_to_unshare>
 ```
 
-### Community image approach
+### Community Image Approach
 
 This approach is very simple:
 
diff --git a/topics/compute/openstack/docs/additional-information/gpu-computing.md b/topics/compute/openstack/docs/additional-information/gpu-computing.md
index 05efecd9..bd81b12c 100644
--- a/topics/compute/openstack/docs/additional-information/gpu-computing.md
+++ b/topics/compute/openstack/docs/additional-information/gpu-computing.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# GPU computing
+# GPU Computing
 
 On this page you can find a static list of offered GPUs in MetaCentrum Cloud.
 
diff --git a/topics/compute/openstack/docs/additional-information/ip-allocation-policy.md b/topics/compute/openstack/docs/additional-information/ip-allocation-policy.md
index a4d16d53..497ad207 100644
--- a/topics/compute/openstack/docs/additional-information/ip-allocation-policy.md
+++ b/topics/compute/openstack/docs/additional-information/ip-allocation-policy.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# IP allocation policy
+# IP Allocation Policy
 
 In MetaCentrum Cloud (MCC) we support both IPv4 and IPv6. IPv4 allocation policies are based on Floating IPs (FIP). This type of networking requires the user to first connect the virtual network containing a specific VM to the public network before allocating a FIP for that VM. Further information is available on the page [Virtual networking](../additional-information/virtual-networking.md). The IPv6 allocation policy is based on a common IPv6 public network, which can be directly attached to VMs.
 
@@ -40,7 +40,7 @@ The situation is rather different for group projects. You cannot use the same ap
 
     If you use a MUNI account, you can use `private-muni-10-16-116` and log into the network via MUNI VPN or you can set up Proxy networking, which is described on page [Proxy networking](../additional-information/proxy-networking.md).
 
-### Floating IP conversion
+### Floating IP Conversion
 
 One floating IP per project should generally suffice. All OpenStack instances are deployed on top of internal OpenStack networks. These internal networks are not accessible from outside of OpenStack by default, but instances on top of the same internal network can communicate with each other.
 
@@ -58,6 +58,6 @@ In case, that these options are not suitable for your use case, you can still re
 
 ## IPv6 Networking
 
-### IPv6 Shared network
+### IPv6 Shared Network
 
 We have prepared an IPv6 prefix `public-muni-v6-432`, which is available for both personal and group projects. The network is available as an attachable network for VMs with no limits. For more information please refer to page [Attaching interface](../how-to-guides/attaching-interface.md).
diff --git a/topics/compute/openstack/docs/additional-information/ipv6-troubleshooting.md b/topics/compute/openstack/docs/additional-information/ipv6-troubleshooting.md
index 3cb09947..581cf137 100644
--- a/topics/compute/openstack/docs/additional-information/ipv6-troubleshooting.md
+++ b/topics/compute/openstack/docs/additional-information/ipv6-troubleshooting.md
@@ -5,25 +5,25 @@ search:
   exclude: false
 ---
 
-# IPv6 troubleshooting
+# IPv6 Troubleshooting
 
 Public IPv6 addresses are assigned via SLAAC. After assigning an interface in OpenStack to your instance, verify the correct configuration of your VM. You can assign an interface by directly connecting your VM to the network upon creation or by assigning a secondary interface.
 
-## Metadata service
+## Metadata Service
 
 There is an issue with the metadata service in IPv6-only environments in our OpenStack Cloud. If you decide to use IPv6 for public access, we recommend adding a local IPv4 network to your VM for deployment of the initial configuration via the metadata service. This problem usually manifests as missing SSH keys in your VM in an IPv6-only deployment.
 
-## IPv6 address not obtained
+## IPv6 Address Not Obtained
 
 This problem should occur only when assigning additional interfaces to your existing VM. First, verify that the interface is enabled in the system via `ip addr`; if the interface is down, run `ifconfig ETH_NAME up`.
 
 Some Linux images have SLAAC disabled by default. In this case, you can either assign the address allocated by OpenStack manually, or set up SLAAC on your VM.
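A minimal sketch of assigning the address manually (the interface name and address are illustrative; use the values OpenStack allocated to your VM's port):

```shell
# Hypothetical interface and address -- check "openstack port show" or the
# dashboard for the IPv6 address allocated to your VM's port.
IFACE=eth1
ADDR='2001:db8:1234::42/64'

sudo ip link set "$IFACE" up              # make sure the interface is up
sudo ip -6 addr add "$ADDR" dev "$IFACE"  # assign the allocated address manually
```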
 
-## Security groups
+## Security Groups
 
 If you have been using your VM with IPv4, make sure to update your [Security groups](../additional-information/security-groups.md) to also allow IPv6 traffic, otherwise it will be inaccessible over IPv6. For configuration, refer to the tutorial [Creating first infrastructure](../getting-started/creating-first-infrastructure.md#update-security-group).
 
-## DNS records
+## DNS Records
 
 By default, OpenStack injects DNS records into new VMs upon creation. If you are missing IPv6 DNS records on your VM and decide to completely remove IPv4, you should set up IPv6 records in the file `/etc/resolv.conf`.
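For example, a minimal IPv6-only `/etc/resolv.conf` could look like this (the addresses below are Google's public IPv6 resolvers, used purely as an illustration; prefer the resolvers of your own network):

```
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844
```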
 
diff --git a/topics/compute/openstack/docs/additional-information/object-storage.md b/topics/compute/openstack/docs/additional-information/object-storage.md
index 6c01d827..bc228ca0 100644
--- a/topics/compute/openstack/docs/additional-information/object-storage.md
+++ b/topics/compute/openstack/docs/additional-information/object-storage.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-## Object storage
+## Object Storage
 
 OpenStack supports object storage based on [OpenStack Swift](https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html). Creation of an object storage container (database) is done by clicking `+Container` on the [Object storage containers page](https://dashboard.cloud.muni.cz/project/containers).
 
@@ -17,12 +17,11 @@ Every object typically contains data along with metadata and a unique global ide
 
 In both cases, you will need application credentials to be able to manage your data.
 
-
-### Swift credentials
+### Swift Credentials
 
 The easiest way to generate **Swift** storage credentials is through the [MetaCentrum cloud dashboard](https://dashboard.cloud.muni.cz). You can generate application credentials as described [here](../how-to-guides/obtaining-api-key.md). You must have the **heat_stack_owner** role.
 
-### S3 credentials
+### S3 Credentials
 
 If you want to use the **S3 API**, you will need to generate EC2 credentials for access. Note that to generate EC2 credentials you will also need credentials containing the **heat_stack_owner** role. Once you have sourced your credentials for the CLI, you can generate EC2 credentials with the following command:
 
diff --git a/topics/compute/openstack/docs/additional-information/register.md b/topics/compute/openstack/docs/additional-information/register.md
index dc679947..a8943bbf 100644
--- a/topics/compute/openstack/docs/additional-information/register.md
+++ b/topics/compute/openstack/docs/additional-information/register.md
@@ -67,12 +67,12 @@ request a group project using [this form](https://projects.cloud.muni.cz/) and p
 * __estimated length of the project__,
 * __access control information__ _[(info)](#get-access-control-information)_.
 
+## Increase Quotas for Existing Group Project
 
-## Increase quotas for existing group project
 To request a quota increase or access to a particular [flavor](../technical-reference/flavors.md), please use [this form](https://projects.cloud.muni.cz/).
 
+## Get Access Control Information
 
-## Get access control information
 __Access control__ is based on information provided by the selected identity federation
 and is presented in the form of a VO name and, optionally, a group name. Every user
 with active membership in the specified VO/group will have full access to all resources
diff --git a/topics/compute/openstack/docs/additional-information/terms-of-service.md b/topics/compute/openstack/docs/additional-information/terms-of-service.md
index 78d6871a..6f2196e8 100644
--- a/topics/compute/openstack/docs/additional-information/terms-of-service.md
+++ b/topics/compute/openstack/docs/additional-information/terms-of-service.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# Terms of service
+# Terms of Service
 
 The following documents and rules describe your rights and responsibilities as a user of MetaCentrum Cloud.
 
diff --git a/topics/compute/openstack/docs/additional-information/using-cloud-tools.md b/topics/compute/openstack/docs/additional-information/using-cloud-tools.md
index 37525caa..52ca8317 100644
--- a/topics/compute/openstack/docs/additional-information/using-cloud-tools.md
+++ b/topics/compute/openstack/docs/additional-information/using-cloud-tools.md
@@ -5,11 +5,12 @@ search:
   exclude: false
 ---
 
-# Using Cloud tools
+# Using Cloud Tools
 
 [Cloud tools](https://gitlab.ics.muni.cz/cloud/cloud-tools) is a Docker container prepared with modules required for cloud management.
 
 ## Setup
+
 In order to use the container, you have to [install Docker](https://docs.docker.com/engine/install/centos/) and start the service.
 The next step is to clone the [cloud tools](https://gitlab.ics.muni.cz/cloud/cloud-tools) repository
 and start the Docker container by running:
diff --git a/topics/compute/openstack/docs/additional-information/virtual-networking.md b/topics/compute/openstack/docs/additional-information/virtual-networking.md
index f2151fbe..b184f1a1 100644
--- a/topics/compute/openstack/docs/additional-information/virtual-networking.md
+++ b/topics/compute/openstack/docs/additional-information/virtual-networking.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# Virtual networking
+# Virtual Networking
 
 MetaCentrum Cloud offers software-defined networking as one of its services. Users can create their own
 networks and subnets, connect them with routers and set up tiered network topologies.
diff --git a/topics/compute/openstack/docs/additional-information/windows.md b/topics/compute/openstack/docs/additional-information/windows.md
index 5fae00ce..2f43ddbc 100644
--- a/topics/compute/openstack/docs/additional-information/windows.md
+++ b/topics/compute/openstack/docs/additional-information/windows.md
@@ -16,12 +16,11 @@ The next step is to create a security group, that will allow access to a port `3
 
 We recommend disabling those accounts and creating new ones to administer Windows instances in any production environment.
 
-
 # Licensing
 
 - We are not currently supporting Windows licensing. License responsibility for Windows is entirely up to the user.
 
-# Advanced users
+# Advanced Users
 
 - You may use all features of [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows.
 - Windows Server [hardening guidelines](https://security.uconn.edu/server-hardening-standard-windows/).
diff --git a/topics/compute/openstack/docs/getting-started/creating-first-infrastructure.md b/topics/compute/openstack/docs/getting-started/creating-first-infrastructure.md
index 590d1944..c7d39533 100644
--- a/topics/compute/openstack/docs/getting-started/creating-first-infrastructure.md
+++ b/topics/compute/openstack/docs/getting-started/creating-first-infrastructure.md
@@ -9,7 +9,7 @@ search:
   img[alt=login] { height: 300px; }
 </style>
 
-# Create first instance
+# Create First Instance
 
 The following guide will take you through the steps necessary to start your first virtual machine instance.
 
diff --git a/topics/compute/openstack/docs/getting-started/creating-project.md b/topics/compute/openstack/docs/getting-started/creating-project.md
index bf985ce6..f5173137 100644
--- a/topics/compute/openstack/docs/getting-started/creating-project.md
+++ b/topics/compute/openstack/docs/getting-started/creating-project.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# Creating group project
+# Creating Group Project
diff --git a/topics/compute/openstack/docs/how-to-guides/accessing-instances.md b/topics/compute/openstack/docs/how-to-guides/accessing-instances.md
index 7ec40bbc..73b2ceeb 100644
--- a/topics/compute/openstack/docs/how-to-guides/accessing-instances.md
+++ b/topics/compute/openstack/docs/how-to-guides/accessing-instances.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# Accessing instances
+# Accessing Instances
 
 ## Prerequisites
 
@@ -38,9 +38,9 @@ connect to the VM via SSH.
     Before the connection via putty is possible it is first necessary to import
     private ssh key as is explained in [Technical reference](../technical-reference/remote-access.md).
 
-## Default users
+## Default Users
 
-| OS     | login for ssh command |
+| OS     | Login for SSH command |
 |--------|-----------------------|
 | Debian | debian                |
 | Ubuntu | ubuntu                |
diff --git a/topics/compute/openstack/docs/how-to-guides/allocating-floating-ips.md b/topics/compute/openstack/docs/how-to-guides/allocating-floating-ips.md
index 88e2cb25..17f932cf 100644
--- a/topics/compute/openstack/docs/how-to-guides/allocating-floating-ips.md
+++ b/topics/compute/openstack/docs/how-to-guides/allocating-floating-ips.md
@@ -11,7 +11,7 @@ search:
 
 - Created [networking](../how-to-guides/create-networking.md)
 
-## Allocation and assignment of FIP
+## Allocation and Assignment of FIP
 
 Floating IPs are used to assign a public IP address to VMs.
 
diff --git a/topics/compute/openstack/docs/how-to-guides/attaching-interface.md b/topics/compute/openstack/docs/how-to-guides/attaching-interface.md
index a7aac6dc..cc8a3e5c 100644
--- a/topics/compute/openstack/docs/how-to-guides/attaching-interface.md
+++ b/topics/compute/openstack/docs/how-to-guides/attaching-interface.md
@@ -5,13 +5,13 @@ search:
   exclude: false
 ---
 
-# Attaching network interface
+# Attaching Network Interface
 
 ## Prerequisites
 
 - Created [instance](../getting-started/creating-first-infrastructure.md).
 
-## Attaching interface
+## Attaching Interface
 
 This guide shows how to attach additional interfaces to running instances. This approach can be used for both IPv4 and IPv6 networks.
 
diff --git a/topics/compute/openstack/docs/how-to-guides/attaching-remote-storage.md b/topics/compute/openstack/docs/how-to-guides/attaching-remote-storage.md
index 84bc96b7..18258419 100644
--- a/topics/compute/openstack/docs/how-to-guides/attaching-remote-storage.md
+++ b/topics/compute/openstack/docs/how-to-guides/attaching-remote-storage.md
@@ -5,4 +5,4 @@ search:
   exclude: true
 ---
 
-# Attaching remote storage
+# Attaching Remote Storage
diff --git a/topics/compute/openstack/docs/how-to-guides/changing-vm-resources.md b/topics/compute/openstack/docs/how-to-guides/changing-vm-resources.md
index a5ece4fa..2654dc98 100644
--- a/topics/compute/openstack/docs/how-to-guides/changing-vm-resources.md
+++ b/topics/compute/openstack/docs/how-to-guides/changing-vm-resources.md
@@ -5,13 +5,13 @@ search:
   exclude: false
 ---
 
-# Changing VM resources
+# Changing VM Resources
 
 ## Prerequisites
 
 - Created [instance](../getting-started/creating-first-infrastructure.md).
 
-## Resizing image
+## Resizing Image
 
 In this guide we will show you how to change the VM resources by changing the [flavor](../technical-reference/flavors.md).
 
diff --git a/topics/compute/openstack/docs/how-to-guides/create-networking.md b/topics/compute/openstack/docs/how-to-guides/create-networking.md
index 38718947..a9a06815 100644
--- a/topics/compute/openstack/docs/how-to-guides/create-networking.md
+++ b/topics/compute/openstack/docs/how-to-guides/create-networking.md
@@ -5,11 +5,11 @@ search:
   exclude: false
 ---
 
-# Create networking
+# Create Networking
 
 We can create a virtual network in OpenStack for the project, which can be used by multiple VMs and separates the logical topology for each user.
 
-## Network and subnet creation
+## Network and Subnet Creation
 
 === "GUI"
 
@@ -58,7 +58,7 @@ We can create a virtual network in OpenStack for the project, which can be used
     Additional subnet configuration is available in [official CLI documentation](https://docs.openstack.org/python-openstackclient/train/cli/command-objects/subnet.html).
 
 
-## Router creation
+## Router Creation
 
 === "GUI"
 
diff --git a/topics/compute/openstack/docs/how-to-guides/deploying-loadbalancers.md b/topics/compute/openstack/docs/how-to-guides/deploying-loadbalancers.md
index f6e9c53d..c0689f03 100644
--- a/topics/compute/openstack/docs/how-to-guides/deploying-loadbalancers.md
+++ b/topics/compute/openstack/docs/how-to-guides/deploying-loadbalancers.md
@@ -5,11 +5,11 @@ search:
   exclude: false
 ---
 
-# Deploying loadbalancers
+# Deploying Loadbalancers
 
 Load balancers serve as a proxy between virtualised infrastructure and clients in the outside network. This is essential in OpenStack, since it supports scenarios where the infrastructure dynamically starts new VMs and adds them to the load-balancing pool in order to keep services accessible.
 
-## Create loadbalancers
+## Create Loadbalancers
 
 To create a load balancer, first prepare a pool of VMs running the service you wish to balance. Next, create the load balancer in the same network and assign the pool as well as listeners on specific ports.
 
@@ -39,7 +39,8 @@ To create a load balancer, first prepare a pool of VMs with operational service
     openstack loadbalancer member create --address vm_ip_address --protocol-port 80 --wait my_pool
     ```
 
-## Delete loadbalancers
+## Delete Loadbalancers
+
 When deleting a loadbalancer, first unassign the floating IP address it uses.
 
 === "CLI"
diff --git a/topics/compute/openstack/docs/how-to-guides/high-availability-deployment.md b/topics/compute/openstack/docs/how-to-guides/high-availability-deployment.md
index 6556f9f6..22022d71 100644
--- a/topics/compute/openstack/docs/how-to-guides/high-availability-deployment.md
+++ b/topics/compute/openstack/docs/how-to-guides/high-availability-deployment.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# High availability deployment
+# High Availability Deployment
diff --git a/topics/compute/openstack/docs/how-to-guides/maintaining-cloud-resources.md b/topics/compute/openstack/docs/how-to-guides/maintaining-cloud-resources.md
index eb2f29a1..931fa759 100644
--- a/topics/compute/openstack/docs/how-to-guides/maintaining-cloud-resources.md
+++ b/topics/compute/openstack/docs/how-to-guides/maintaining-cloud-resources.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# Maintaining cloud resources
+# Maintaining Cloud Resources
diff --git a/topics/compute/openstack/docs/how-to-guides/manage-volumes.md b/topics/compute/openstack/docs/how-to-guides/manage-volumes.md
index 979a0f4d..f40702c4 100644
--- a/topics/compute/openstack/docs/how-to-guides/manage-volumes.md
+++ b/topics/compute/openstack/docs/how-to-guides/manage-volumes.md
@@ -5,13 +5,13 @@ search:
   exclude: false
 ---
 
-# Manage volumes
+# Manage Volumes
 
 When storing a large amount of data in a virtual machine instance, it is advisable to use a separate volume rather than the
 root file system containing the operating system. This adds flexibility and often prevents data loss. Volumes can be
 attached to and detached from instances at any time; their creation and deletion are managed separately from instances.
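
The same lifecycle can be sketched with the OpenStack CLI; the instance and volume names are placeholders:

```sh
# Create a 100 GB volume, independent of any instance
openstack volume create --size 100 my_data_volume

# Attach the volume to a running instance, detach it again later
openstack server add volume my_instance my_data_volume
openstack server remove volume my_instance my_data_volume

# The volume and its data persist until deleted explicitly
openstack volume delete my_data_volume
```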
 
-## Creating volume
+## Creating Volume
 
 __1.__ In **Project &gt; Volumes &gt; Volumes**, select **Create Volume**.
 
@@ -32,7 +32,7 @@ For details, refer to [the official documentation](https://docs.openstack.org/ho
 
 It is possible to create volume snapshots or backups. In this guide we will focus on volume backups. When creating a backup, it is recommended to turn off the instance if possible to prevent data errors.
 
-### Creating volume backup
+### Creating Volume Backup
 
 __1.__ __(optional)__ In **Project &gt; Compute &gt; Instances**, turn off the affected instance.
 
@@ -54,7 +54,7 @@ __3.__ Specify Backup Name and optional information and press **Create Volume Ba
 
 __3.__ Wait for the backup to be created; it will then be stored in **Project &gt; Volumes &gt; Backups**.
 
-### Restoring volume backup
+### Restoring Volume Backup
 
 __1.__ __(optional)__ In **Project &gt; Compute &gt; Instances**, turn off the affected instance.
 
@@ -70,14 +70,14 @@ __2.__ In **Project &gt; Volumes &gt; Backups** open the **Actions** menu of sel
 
 __3.__ Wait for the Backup to be restored.
 
-## Volume resize
+## Volume Resize
 
 We can distinguish two types of volumes, namely
 
 - Attachable volumes: additional volumes that don't contain the system image; the VM can start up without them.
 - System volumes: volumes containing the boot image, which must always be present.
 
-### Resizing attachable volume
+### Resizing Attachable Volume
 
 When working with volumes, we highly recommend always making a [volume backup](#creating-volume-backup) before performing any operations on the volume.
 
@@ -120,7 +120,7 @@ __6.__ Attach the volume back to the instance via **Manage Attachments**.
 __7.__ Verify correct mounting of the volume in the instance.
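
For CLI users, the detach-resize-reattach cycle above might look roughly like this sketch; the names and sizes are placeholders, and older client releases may use `cinder extend` instead of `volume set --size`:

```sh
# Detach the volume first; extending an in-use volume may not be supported
openstack server remove volume my_instance my_data_volume

# Extend the volume to 200 GB and reattach it
openstack volume set --size 200 my_data_volume
openstack server add volume my_instance my_data_volume
```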
 
 
-### Resizing system volume
+### Resizing System Volume
 
 Resizing the system volume is not possible. It is, however, possible to create a backup of the system volume, make the necessary changes, and deploy a new VM with the modified volume.
 
diff --git a/topics/compute/openstack/docs/how-to-guides/obtaining-api-key.md b/topics/compute/openstack/docs/how-to-guides/obtaining-api-key.md
index 3d0972f3..9279eeb3 100644
--- a/topics/compute/openstack/docs/how-to-guides/obtaining-api-key.md
+++ b/topics/compute/openstack/docs/how-to-guides/obtaining-api-key.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# Obtaining API key
+# Obtaining API Key
 
 __1.__ In **Identity &gt; Application Credentials**, click on **Create Application Credential**.
 
 __2.__ Choose a name, description, and expiration date & time.
diff --git a/topics/compute/openstack/docs/how-to-guides/using-backups.md b/topics/compute/openstack/docs/how-to-guides/using-backups.md
index e2aaf4e4..e70e0c47 100644
--- a/topics/compute/openstack/docs/how-to-guides/using-backups.md
+++ b/topics/compute/openstack/docs/how-to-guides/using-backups.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# Using backups
+# Using Backups
diff --git a/topics/compute/openstack/docs/how-to-guides/using-custom-linux-images.md b/topics/compute/openstack/docs/how-to-guides/using-custom-linux-images.md
index 3eeb597e..127ab391 100644
--- a/topics/compute/openstack/docs/how-to-guides/using-custom-linux-images.md
+++ b/topics/compute/openstack/docs/how-to-guides/using-custom-linux-images.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# Using custom linux images
+# Using Custom Linux Images
diff --git a/topics/compute/openstack/docs/how-to-guides/using-object-storage.md b/topics/compute/openstack/docs/how-to-guides/using-object-storage.md
index 7c73f063..b00cad0b 100644
--- a/topics/compute/openstack/docs/how-to-guides/using-object-storage.md
+++ b/topics/compute/openstack/docs/how-to-guides/using-object-storage.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# Using object storage
+# Using Object Storage
diff --git a/topics/compute/openstack/docs/technical-reference/cloud-resources.md b/topics/compute/openstack/docs/technical-reference/cloud-resources.md
index 2a6afa9e..1ee01e54 100644
--- a/topics/compute/openstack/docs/technical-reference/cloud-resources.md
+++ b/topics/compute/openstack/docs/technical-reference/cloud-resources.md
@@ -5,9 +5,9 @@ search:
   exclude: false
 ---
 
-# Cloud resources
+# Cloud Resources
 
-## Classification of application
+## Classification of Application
 
 Your application may be:
 
@@ -17,7 +17,7 @@ Your application may be:
 
 Applications running in a single cloud resource (`A.` and `B.`) are a direct match for MetaCentrum Cloud OpenStack. Distributed applications (`C.`) are best handled by [MetaCentrum PBS system](https://metavo.metacentrum.cz/cs/state/personal).
 
-## Maintaining cloud resources
+## Maintaining Cloud Resources
 
 Your project is computed within a MetaCentrum Cloud OpenStack project where you can claim MetaCentrum Cloud OpenStack resources (for example a virtual machine, a floating IP, ...). There are multiple ways to set up the MetaCentrum Cloud OpenStack resources:
 
@@ -29,7 +29,7 @@ Your project is computed within the MetaCentrum Cloud Openstack project where yo
 
 If your project infrastructure (MetaCentrum Cloud OpenStack resources) within the cloud is static, you may select a manual approach with the [MetaCentrum Cloud OpenStack Dashboard UI](https://dashboard.cloud.muni.cz). Some projects need to allocate MetaCentrum Cloud OpenStack resources dynamically; in such cases we strongly encourage automation even at this stage.
 
-## Transfering data to cloud
+## Transferring Data to Cloud
 
 There are several options for transferring the project to cloud resources:
 
@@ -40,11 +40,11 @@ There are several options how to transfer the project to cloud resources:
 * indirectly in an OpenStack (Glance) image (you need to obtain the image-uploader role)
    * OpenStack Glance images may be public, private, community or shared.
 
-### SSH to cloud VM resources and manual update
+### SSH to Cloud VM Resources and Manual Update
 
 In this scenario, you log into your cloud VM and perform all needed actions manually. This approach does not scale well and is error-prone: different users may configure cloud VM resources in different ways, sometimes resulting in different resource behavior.
 
-### Automated work transfer and synchronization with docker (or podman)
+### Automated Work Transfer and Synchronization With Docker (or Podman)
 
 There are automation tools that may help ease your cloud usage:
 
@@ -56,7 +56,6 @@ Ansible is a cloud automation tool that helps you with:
 * keeping your VM updated
 * automatically migrating your applications or data to/from cloud VM
 
-
 A container runtime engine helps you put your work into a container stored in a container registry.
 Putting your work into a container has several advantages:
 
@@ -74,7 +73,7 @@ As a container registry we suggest either:
 
 An example of such an approach is demonstrated in [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).
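
The usual build-and-push cycle behind this approach can be sketched as follows; the registry host, group, and tag are placeholders for your own project, and `podman` accepts the same subcommands:

```sh
# Build a versioned image from the project's Dockerfile
docker build -t registry.example.org/mygroup/myapp:1.0.0 .

# Authenticate against the registry and push the image
docker login registry.example.org
docker push registry.example.org/mygroup/myapp:1.0.0
```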
 
-## Receiving data from experiments to your workstation
+## Receiving Data From Experiments to Your Workstation
 
 It depends on how your data are stored; the options are:
 
@@ -85,8 +84,7 @@ It certainly depends on how your data are stored, the options are:
  * data stored in S3-compatible storage may be easily retrieved via the [MinIO client application `mc`](https://docs.min.io/docs/minio-client-complete-guide)
  * data stored in OpenStack Swift may be retrieved via the [OpenStack Swift Python client `swift`](https://docs.openstack.org/python-swiftclient/train/swiftclient.html)
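
As a sketch, these options might look as follows; the addresses, credentials, and bucket/container names are placeholders:

```sh
# Pull results over SSH from the VM to the workstation
rsync -avz centos@147.251.21.72:/data/results/ ./results/

# Fetch objects from S3-compatible storage with the MinIO client
mc alias set mycloud https://s3.example.org ACCESS_KEY SECRET_KEY
mc cp --recursive mycloud/results-bucket/ ./results/

# List and download a container with the Swift client
swift list results-container
swift download results-container
```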
 
-
-## Highly available cloud application
+## Highly Available Cloud Application
 
 Let's assume your application is already running in multiple instances in the cloud.
 To make your application highly available (HA), you need to
@@ -96,8 +94,7 @@ To make your application highly available (HA) you need to
 
 Your application needs a Fully Qualified Domain Name (FQDN) to be easily reachable. The FQDN is set on the public floating IP linked to the load-balancer.
 
-
-## Cloud project example and workflow recommendations
+## Cloud Project Example and Workflow Recommendations
 
 This chapter summarizes effective cloud workflows using the example [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).
 
@@ -117,7 +114,6 @@ The project recommendations are:
    * multiple ways how to execute the application (container cloud support advanced container life-cycle management)
 1. The project should have a changelog (either manually written or generated), for instance [`CHANGELOG.md`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/CHANGELOG.md)
 
-
 We recommend that every project define a cloud usage workflow, which may consist of:
 
 1. Cloud resource initialization, performing
@@ -133,7 +129,7 @@ We recommend every project defines cloud usage workflow which may consist of:
   * download of project data from the cloud to the user's workstation
 1. Cloud resource destroy
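
Such a workflow often reduces to a short command sequence; the playbook names below are hypothetical and only illustrate the phases:

```sh
# 1. Cloud resource initialization (infrastructure as code)
terraform init && terraform apply

# 2. Project work transfer, execution and monitoring
ansible-playbook -i inventory deploy.yml
ansible-playbook -i inventory run-experiment.yml

# 3. Download project data, then destroy the resources
ansible-playbook -i inventory fetch-results.yml
terraform destroy
```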
 
-## Road-map to effective cloud usage
+## Roadmap to Effective Cloud Usage
 
 Project automation is usually done in CI/CD pipelines. Read the [GitLab CI/CD article](https://docs.gitlab.com/ee/ci/introduction/) for more details.
 ![GitLab workflow example](https://docs.gitlab.com/ee/ci/introduction/img/gitlab_workflow_example_extended_v12_3.png)
@@ -147,9 +143,7 @@ The following table shows the different cloud usage phases:
 | [continuous delivery](https://docs.gitlab.com/ee/ci/introduction/#continuous-delivery) (automated, but deploy manual) | semi-automated (GUI + `ansible` executed manually) | container ([semver](https://semver.org) versioned) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` and `ssh` manually)  |
 | [continuous deployment](https://docs.gitlab.com/ee/ci/introduction/#continuous-deployment) (fully-automated) | automated (`terraform` and/or `ansible` in CI/CD) | container ([semver](https://semver.org) versioned) | automated (`ansible` in CI/CD) | automated (`ansible` in CI/CD) | automated (`ansible` in CI/CD) | semi-automated (`ansible` in CI/CD and `ssh` manually)  |
 
-
-
-## How to convert the legacy application into a container for a cloud?
+## How to Convert a Legacy Application Into a Container for the Cloud?
 
 Containerization of applications is one of the best practices when you want to share your application and execute it in the cloud. Read about [the benefits](https://cloud.google.com/containers).
 
diff --git a/topics/compute/openstack/docs/technical-reference/data-storage.md b/topics/compute/openstack/docs/technical-reference/data-storage.md
index 873b079a..cfbb75aa 100644
--- a/topics/compute/openstack/docs/technical-reference/data-storage.md
+++ b/topics/compute/openstack/docs/technical-reference/data-storage.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# Data storage
+# Data Storage
 
 Every project generates data that needs to be stored. There are several options (sorted by preference):
 
diff --git a/topics/compute/openstack/docs/technical-reference/get-support.md b/topics/compute/openstack/docs/technical-reference/get-support.md
index bdb16e65..b9e616d2 100644
--- a/topics/compute/openstack/docs/technical-reference/get-support.md
+++ b/topics/compute/openstack/docs/technical-reference/get-support.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# Get support
+# Get Support
 
 If you encounter a problem regarding OpenStack or MetaCentrum Cloud
 that is not described in this documentation, you can use our support.
diff --git a/topics/compute/openstack/docs/technical-reference/image-rotation.md b/topics/compute/openstack/docs/technical-reference/image-rotation.md
index 9a82331e..522ae890 100644
--- a/topics/compute/openstack/docs/technical-reference/image-rotation.md
+++ b/topics/compute/openstack/docs/technical-reference/image-rotation.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# Image rotation
+# Image Rotation
 
 Image rotation in our cloud is done on a two-month basis. Timestamps are added to old images.
 
diff --git a/topics/compute/openstack/docs/technical-reference/openstack-management.md b/topics/compute/openstack/docs/technical-reference/openstack-management.md
index 7e9b535b..f74d4286 100644
--- a/topics/compute/openstack/docs/technical-reference/openstack-management.md
+++ b/topics/compute/openstack/docs/technical-reference/openstack-management.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# OpenStack management
+# OpenStack Management
diff --git a/topics/compute/openstack/docs/technical-reference/openstack-modules.md b/topics/compute/openstack/docs/technical-reference/openstack-modules.md
index 5b2ef5a7..8ff0ff77 100644
--- a/topics/compute/openstack/docs/technical-reference/openstack-modules.md
+++ b/topics/compute/openstack/docs/technical-reference/openstack-modules.md
@@ -5,7 +5,7 @@ search:
   exclude: false
 ---
 
-# OpenStack modules
+# OpenStack Modules
 
 The following table contains a list of deployed OpenStack services. Services are separated
 into two groups based on their stability and the level of support we are able to provide. All services in the production
diff --git a/topics/compute/openstack/docs/technical-reference/openstack-status.md b/topics/compute/openstack/docs/technical-reference/openstack-status.md
index 41af2a53..46b54873 100644
--- a/topics/compute/openstack/docs/technical-reference/openstack-status.md
+++ b/topics/compute/openstack/docs/technical-reference/openstack-status.md
@@ -5,6 +5,6 @@ search:
   exclude: true
 ---
 
-# OpenStack status
+# OpenStack Status
 
 TODO: export status of openstack services.
diff --git a/topics/compute/openstack/docs/technical-reference/quota-limits.md b/topics/compute/openstack/docs/technical-reference/quota-limits.md
index 22bc2004..5cf3407b 100644
--- a/topics/compute/openstack/docs/technical-reference/quota-limits.md
+++ b/topics/compute/openstack/docs/technical-reference/quota-limits.md
@@ -5,11 +5,12 @@ search:
   exclude: false
 ---
 
-# Quota limits
+# Quota Limits
 
 Quotas are used to specify individual resources for each project. In the following tables you can see the default resources available for each project. If you need to increase these resources, you can contact [support](../technical-reference/get-support.md).
 
-## Compute resources (Nova)
+## Compute Resources (Nova)
+
 | resource             | quota |
 |----------------------|-------|
 | instances            | 5     |
@@ -19,7 +20,8 @@ Quotas are used to specify individual resources for each project. In the followi
 | server_groups        | 10    |
 | server_group_members | 10    |
 
-## Network resources (Neutron)
+## Network Resources (Neutron)
+
 | resource            | quota |
 |---------------------|-------|
 | network             | 1     |
@@ -30,7 +32,8 @@ Quotas are used to specify individual resources for each project. In the followi
 | security_group      | 10    |
 | security_group_rule | 100   |
 
-## Load balancer resources (Octavia)
+## Load Balancer Resources (Octavia)
+
 | resource        | quota |
 |-----------------|-------|
 | loadbalancer    | 1     |
@@ -39,7 +42,8 @@ Quotas are used to specify individual resources for each project. In the followi
 | pool            | 5     |
 | health_monitors | 10    |
 
-## Data storage (Cinder)
+## Data Storage (Cinder)
+
 | resource             | quota     |
 |----------------------|-----------|
 | gigabytes            | 1000      |
@@ -49,13 +53,15 @@ Quotas are used to specify individual resources for each project. In the followi
 | backups              | 10        |
 | groups               | 10        |
 
-## Image storage (Glance)
+## Image Storage (Glance)
+
 | resource      | quota |
 |---------------|-------|
 | properties    | 128   |
 | image_storage | 2000  |
 
-## Secret storage (Barbican)
+## Secret Storage (Barbican)
+
 | resource    | quota |
 |-------------|-------|
 | secrets     | 20    |
diff --git a/topics/compute/openstack/docs/technical-reference/remote-access.md b/topics/compute/openstack/docs/technical-reference/remote-access.md
index c19dff01..a88f5e2d 100644
--- a/topics/compute/openstack/docs/technical-reference/remote-access.md
+++ b/topics/compute/openstack/docs/technical-reference/remote-access.md
@@ -5,11 +5,11 @@ search:
   exclude: false
 ---
 
-# Remote access
+# Remote Access
 
-## Accessing from Linux
+## Accessing From Linux
 
-### Setting up VPN tunnel via encrypted SSH with [sshuttle](https://github.com/sshuttle/sshuttle)
+### Setting Up VPN Tunnel Via Encrypted SSH With [sshuttle](https://github.com/sshuttle/sshuttle)
 
 ``` sh
 # terminal A
@@ -31,7 +31,7 @@ fi
 sshuttle -r centos@147.251.21.72 172.16.0.0/22
 ```
 
-### Accessing (hidden) project VMs through the VPN tunnel
+### Accessing (Hidden) Project VMs Through VPN Tunnel
 
 ``` sh
 # terminal B
@@ -44,17 +44,18 @@ $ curl 172.16.1.67:8080
 Hello, world, cnt=1, hostname=freznicek-ubu
 ```
 
-## Accessing from Windows
+## Accessing From Windows
 
 [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-what) is an SSH client program for Windows.
 
 ### PuTTY Installer
+
 We recommend downloading [Windows Installer](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) with PuTTY utilities as:
 
 * Pageant (SSH authentication agent) - stores the private key in memory without the need to retype the passphrase on every login
 * PuTTYgen (PuTTY key generator) - converts the OpenSSH format of id_rsa to a PuTTY ppk private key, and so on
 
-### Connect to the Instance
+### Connect to Instance
 
 * Run PuTTY and enter the host name in the format "login@floating IP address", where login is for example `debian` for Debian OS and the floating IP is the [IP address](../how-to-guides/associate-floating-ips.md) used to access the instance from the internet.
 * In Category -> Connection -> SSH -> Auth:
@@ -68,7 +69,6 @@ We recommend downloading [Windows Installer](https://www.chiark.greenend.org.uk/
 
 ![](/compute/openstack/images/putty/putty-connect2instance.png)
 
-
 ### Pageant SSH Agent
 
 * Run Pageant from Windows menu
@@ -81,27 +81,24 @@ We recommend downloading [Windows Installer](https://www.chiark.greenend.org.uk/
 
 ![](/compute/openstack/images/putty/pageant-add-key.png)
 
-
 ### Key Generator
 
 PuTTYgen is the PuTTY key generator. You can load an existing private key and change its passphrase, generate a new public/private key pair, or convert to/from the OpenSSH and PuTTY ppk formats.
 
-### Convert OpenSSH format to PuTTY ppk format
+### Convert OpenSSH Format to PuTTY ppk Format
 
 * Run PuTTYgen; in the menu Conversion -> Import key, browse and load your OpenSSH-format id_rsa private key using your passphrase
 * Save the PuTTY ppk private key using the **Save private key** button, browse to the destination for the PuTTY-format id_rsa.ppk, and save the file
 
 ![](/compute/openstack/images/putty/puttygen-openssh2ppk.png)
 
-
-### Convert PuTTY ppk private key to OpenSSH format
+### Convert PuTTY ppk Private Key to OpenSSH Format
 
 * Run PuTTYgen; in the menu File -> Load private key, browse and open your private key in PuTTY ppk format using your passphrase
 * In the menu Conversion -> Export OpenSSH key, browse to the destination for the OpenSSH-format id_rsa and save the file
 
 ![](/compute/openstack/images/putty/puttygen-ppk2openssh.png)
 
-
 ### Change Password for Existing Private Key Pair
 
 * Load your existing private key using the **Load** button and confirm opening with your passphrase
@@ -110,8 +107,7 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and
 
 ![](/compute/openstack/images/putty/puttygen-passphrase.png)
 
-
-### Generate a New Key Pair
+### Generate New Key Pair
 
 * Start with the **Generate** button
 * Generate some randomness by moving your mouse over the dialog
diff --git a/topics/compute/openstack/docs/technical-reference/service-level-indicators.md b/topics/compute/openstack/docs/technical-reference/service-level-indicators.md
index a6fba226..f813b37b 100644
--- a/topics/compute/openstack/docs/technical-reference/service-level-indicators.md
+++ b/topics/compute/openstack/docs/technical-reference/service-level-indicators.md
@@ -5,6 +5,6 @@ search:
   exclude: true
 ---
 
-# SLI
+# Service Level Indicators
 
 TODO: obtain SLI of our cloud
diff --git a/topics/compute/openstack/docs/technical-reference/volume-usage.md b/topics/compute/openstack/docs/technical-reference/volume-usage.md
index 5b23c977..bc054b86 100644
--- a/topics/compute/openstack/docs/technical-reference/volume-usage.md
+++ b/topics/compute/openstack/docs/technical-reference/volume-usage.md
@@ -5,4 +5,4 @@ search:
   exclude: false
 ---
 
-# Volume usage
+# Volume Usage
diff --git a/topics/compute/sensitive/docs/get-project.md b/topics/compute/sensitive/docs/get-project.md
index a944d919..159d7193 100644
--- a/topics/compute/sensitive/docs/get-project.md
+++ b/topics/compute/sensitive/docs/get-project.md
@@ -1,4 +1,4 @@
-# New project in the SensitiveCloud
+# New Project in SensitiveCloud
 
 We are committed to providing a secure and reliable research environment for data processing, storage, and sharing. Therefore, we have created an onboarding process for every principal investigator who wants to use SensitiveCloud.
 
@@ -6,7 +6,7 @@ During this process, our team will work closely with you to determine if our Tru
 
 Ready?
 
-# Requesting project 
+# Requesting a Project
 
 Resources of SensitiveCloud can be obtained by contacting support@e-infra.cz. 
 
@@ -38,11 +38,10 @@ We will arrange a meeting, where details of your use-case will be discussed.
     Principal investigator's name    
     (Digitally signed)
 
-# What's next?
+# What's Next?
 
 - [Connecting to the SensitiveCloud environment][1]
 - [Manage who has access to your SensitiveCloud project and data][2]
 
 [1]: ../getting-started/connecting-to-sensitive-cloud
 [2]: ../manage-project
-
diff --git a/topics/compute/sensitive/docs/index.md b/topics/compute/sensitive/docs/index.md
index 1bdd3263..91e03aa3 100644
--- a/topics/compute/sensitive/docs/index.md
+++ b/topics/compute/sensitive/docs/index.md
@@ -21,28 +21,29 @@ The environment is built on modern container technology using Kubernetes platfor
 
 [Read more at SensitiveCloud product page][1]
 
-## Getting access to the SensitiveCloud
+## Getting Access to SensitiveCloud
+
 If you want to use the SensitiveCloud services and you are a principal investigator (PI), ask to be allocated computing time in SensitiveCloud. The PI should be a research group/activity leader and is responsible for access to the data and utilization of computing time.
 
 - [Requesting project in SensitiveCloud][2]    
 - [Manage access to the project in SensitiveCloud][3]
 
 ## Using SensitiveCloud
+
 Let's dive into the technical aspects of working with SensitiveCloud. This section is dedicated to anyone who will deploy computing jobs or run applications.
 
 - [Connecting to the SensitiveCloud management][4]
 - [Working with the SensitiveCloud][5]
 - Deploying applications in the SensitiveCloud (TODO)
 
-## Learn more
+## Learn More
 
 > The SensitiveCloud environment is provided by CERIT-SC, which is an organisational unit of the Institute of Computing at Masaryk University and one of the three members of the large research infrastructure e-INFRA CZ.
 
-
 [1]: https://cerit-sc.cz/infrastructure-services/trusted-environment-for-sensitive-data
 [2]: ./get-project
 [3]: ./manage-project
 [4]: ./getting-started/connecting-to-sensitive-cloud
 [5]: https://docs.cerit.io
 
-[migration]: ./migration-from-muni.md
\ No newline at end of file
+[migration]: ./migration-from-muni.md
diff --git a/topics/compute/sensitive/docs/manage-project.md b/topics/compute/sensitive/docs/manage-project.md
index 626dd6af..ea07a73d 100644
--- a/topics/compute/sensitive/docs/manage-project.md
+++ b/topics/compute/sensitive/docs/manage-project.md
@@ -1,4 +1,4 @@
-# Project administration
+# Project Administration
 
 After we have created a project for you in SensitiveCloud, you can manage the following settings as a principal investigator:
 
@@ -10,44 +10,54 @@ After we have created a project for you in sensitivecloud, you can manage the fo
 Access rights are manageable through the group created for your project, for example `sc_modeling-deathstar-aerodynamics`. If you have more activities requiring SensitiveCloud computing resources, you will find more groups starting with `sc_`. Please be careful to select the right group when adding/removing members.
 These groups can be found at the following link: [perun.e-infra.cz/organizations/3898/groups][vo-einfracz-groups].
 
-### Adding colleagues to use your project resources
+### Adding Colleagues to Use Your Project Resources
+
+#### Step 1: Identify Right Group
 
-#### Step 1: Identify the right group
 In the [identity management system][vo-einfracz-groups], search for the name of the group for SensitiveCloud. It starts with the prefix `sc_`.
 
 #### Step 2: Select Members
+
 After selecting the right group in the table, click on the "Members" tile. You will enter membership management of the group.
 
-#### Step 3: Invite new member
+#### Step 3: Invite New Member
+
 Select "Invite" and choose whether you want to invite one or more members.
 
-#### Step 4: Insert new member personal information
+#### Step 4: Insert New Member Personal Information
+
 You will have to input the member's `name` and `e-mail`. Then hit the "Invite" button at the bottom of the window.
 
 #### Congratulations
+
 The user will receive an e-mail notification and will have to log in with their home organization and fill out any missing personal information.
 
-### Remove colleagues from using your project resources
+### Remove Colleagues From Using Your Project Resources
+
+#### Step 1: Identify Right Group
 
-#### Step 1: Identify the right group
 In the [identity management system][vo-einfracz-groups], search for the name of the group for SensitiveCloud. It starts with the prefix `sc_`.
 
 #### Step 2: Select Members
+
 After selecting the right group in the table, click on the "Members" tile. You will enter membership management of the group.
 
-#### Step 3: Check all members to remove
+#### Step 3: Check All Members to Remove
+
 In the table, check all members you would like to remove from the group. You may use the search to find the right members to remove.
 
 #### Step 4: Remove
+
 Select the "Remove" button to remove the selected members from the group.
 
 #### Congratulations
+
 The users will no longer have access to the SensitiveCloud resources that are provided by this group.
 
-## Modify allocated project resources
+## Modify Allocated Project Resources
 
 We are sorry, but at the moment this is not a self-service operation.
 Please send us the amount of requested resources by e-mail: trusted@ics.muni.cz.
 
 [vo-einfracz-groups]: https://perun.e-infra.cz/organizations/3898/groups
-[1]: https://perunaai.atlassian.net/wiki/spaces/PERUN/pages/94732289/Add+member+to+group
\ No newline at end of file
+[1]: https://perunaai.atlassian.net/wiki/spaces/PERUN/pages/94732289/Add+member+to+group
diff --git a/topics/compute/sensitive/docs/migration-from-muni.md b/topics/compute/sensitive/docs/migration-from-muni.md
index 64a05895..b5e6090d 100644
--- a/topics/compute/sensitive/docs/migration-from-muni.md
+++ b/topics/compute/sensitive/docs/migration-from-muni.md
@@ -1,4 +1,4 @@
-# We are expanding to help all scientists in the Czech Republic
+# We Are Expanding to Help All Scientists in the Czech Republic
 
 We are excited to announce that we are expanding our SensitiveCloud service beyond Masaryk University and making it available to the entire e&#8209;INFRA&#160;CZ community. This means that our platform will be accessible to a wider range of researchers and organizations, increasing the potential for collaboration and innovation.
 
@@ -6,11 +6,11 @@ As part of this expansion, we will be changing the way users access the manageme
 
 We understand that changes like this can be disruptive, and we apologize for any inconvenience this may cause. However, we believe that this change will help us to provide an even better service to our users by making the platform more accessible and easier to use.
 
-# How will this change affect you?
+# How Will This Change Affect You?
 
 **No user deployments or running applications will be affected by this change.**
 
-## Principal investigators
+## Principal Investigators
 
 - From now on, you will have to manage the access to the SensitiveCloud resources for your collaborators exclusively from the identity management system of e&#8209;INFRA&#160;CZ. The system is located at: [perun.e‑infra.cz](https://perun.e-infra.cz).
 - New mechanism for managing collaborators over SensitiveCloud resources via applications. [How to add/remove collaborators to SensitiveCloud project](../manage-project)
diff --git a/topics/managed/docs/network/secure-vpn/index.md b/topics/managed/docs/network/secure-vpn/index.md
index a570c86d..6b97ed34 100644
--- a/topics/managed/docs/network/secure-vpn/index.md
+++ b/topics/managed/docs/network/secure-vpn/index.md
@@ -3,10 +3,13 @@
 A Virtual Private Network (VPN) is used to connect to a secure environment that is **isolated from the Internet**. The CERIT-SC VPN solution is based on [WireGuard](https://www.wireguard.com/) software.
 
 To use the VPN, you will need to request access and configuration; please refer to the next section.
-## Obtaining access to VPN
+
+## Obtaining Access to VPN
 
 If you are interested in using the VPN to connect to a secured network and resources isolated from the public network, please contact us at `k8s(at)ics.muni.cz`.
+
 ## Connecting to VPN
+
 The tutorials below show how to set up **WireGuard** with the configuration you have obtained from the CERIT-SC team.
 
 === "Windows"
@@ -48,7 +51,8 @@ Tutorials will show how to setup **WireGuard** with the configuration you have o
       Endpoint = SERVER_IP_ADDRESS:PORT
       AllowedIPs = 0.0.0.0/0
       ```
-      3. In order to activate the tunnel, enter into `terminal` and use following command:   
+      3. To activate the tunnel, open a `terminal` and use the following command:
+         
       ```
       wg-quick up wg0
       ```
diff --git a/topics/managed/docs/portals/binderhub/index.md b/topics/managed/docs/portals/binderhub/index.md
index 8e5a8c17..9062769a 100644
--- a/topics/managed/docs/portals/binderhub/index.md
+++ b/topics/managed/docs/portals/binderhub/index.md
@@ -13,22 +13,27 @@ sidebar:
 BinderHub is a Binder instance running on Kubernetes, located at [binderhub.cloud.e-infra.cz](https://binderhub.cloud.e-infra.cz/). Binder turns a Git repository into a collection of interactive notebooks. It is enough to fill in the Git repository name (optionally a specific notebook or branch) and BinderHub will turn it into a web notebook.
 
 ## Authentication
+
 To use the CERIT-SC BinderHub instance, you have to authenticate. Authentication is performed via unified login.
 
 ## Persistence
+
 After the notebook is spawned, a persistent volume is mounted at `/home/{username}-nfs-pvc`. The same persistent volume is mounted at that path in every notebook you spawn. Therefore, if you want to use data generated in BinderHub instance *A* in BinderHub instance *B*, you can write the data to `/home/{username}-nfs-pvc` and they will be available in both.
 
 Note: Pay attention to the paths used in notebooks. Imagine you have two BinderHub instances running and, in both, you write outputs to `/home/{username}-nfs-pvc/test`. If both notebooks create a file named `result.txt`, you would be overwriting the file. It is good practice to create a new directory under `/home/{username}-nfs-pvc` for each BinderHub instance.
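The per-instance directory convention above can be sketched as follows; the instance names and the helper function are illustrative examples, not part of BinderHub itself:

```python
import os

def instance_output_dir(username: str, instance: str) -> str:
    """Build a per-instance output directory under the shared persistent volume."""
    base = f"/home/{username}-nfs-pvc"
    return os.path.join(base, instance)

# Hypothetical example: each BinderHub instance writes result.txt into its own
# subdirectory, so two instances never overwrite each other's output.
path_a = instance_output_dir("jovyan", "analysis-a")
path_b = instance_output_dir("jovyan", "analysis-b")
# os.makedirs(path_a, exist_ok=True)  # inside the notebook, create before writing
```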
 
 ## Resources
+
 Each user on the hub can use a certain amount of memory and CPU. You are guaranteed **1G of RAM** and **1 CPU**. Resource limits represent a hard limit on the resources available. Limits of **16G of RAM** and **8 CPUs** are in place, which means you can't use more than 16G of RAM and 8 CPUs, no matter what other resources are being used on the machines.
 
 If you need more resources, please contact us at <a href="mailto:k8s@ics.muni.cz">IT Service desk</a>.
 
-## Where to find running notebooks
+## Where to Find Running Notebooks
+
 Your running notebooks can be found at `https://bhub.cloud.e-infra.cz/`. Clicking on an address redirects you to the notebook instance. Because redirection links include random strings, it is advised to work in one browser where cookies can be stored, so you don't have to remember long notebook addresses. Also, avoid incognito windows: the session cookie won't be saved, and when you close the tab, you will not find the instance in the control panel.
 
 ## Limits
+
 Currently, every user is limited to spawning 5 projects. If you reach the quota but want to deploy a new instance, an error will appear under the loading bar of the BinderHub index page.
 
 ![projects_limit](limit.png)
@@ -47,8 +52,8 @@ To spawn new instance, you have to delete one of your running instances.  This c
 
 The hub spawns notebook instances with a default image not containing any special libraries. However, you can create a custom `Dockerfile` with all dependencies and it will be used as the base image. The `Dockerfile` must be located in the repository you are going to launch in Binder.
 
-When creating the `Dockerfile` bear in mind it has to be runnable under *user*. Furthermore, it is important to `chown` all used directories to user, e.g. :
+When creating the `Dockerfile`, bear in mind it has to be runnable under a non-root *user*. Furthermore, it is important to `chown` all used directories to that user, e.g.:
+
 ```
 RUN chown -R 1000:1000 /work /home/jovyan
 ```
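A minimal custom `Dockerfile` along these lines might look as follows; the base image and the installed package are illustrative assumptions, not a prescribed setup:

```
FROM jupyter/minimal-notebook

USER root
# Illustrative: install an extra dependency, then hand ownership back to the notebook user
RUN pip install --no-cache-dir pandas
RUN mkdir -p /work && chown -R 1000:1000 /work /home/jovyan
USER 1000
```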
-
diff --git a/topics/managed/docs/portals/index.md b/topics/managed/docs/portals/index.md
index 7fbcb4f3..cd8498a7 100644
--- a/topics/managed/docs/portals/index.md
+++ b/topics/managed/docs/portals/index.md
@@ -1 +1 @@
-# Computing web portals
\ No newline at end of file
+# Computing Web Portals
diff --git a/topics/managed/docs/portals/jupyterhub/index.md b/topics/managed/docs/portals/jupyterhub/index.md
index 4ef63727..c8964d8c 100644
--- a/topics/managed/docs/portals/jupyterhub/index.md
+++ b/topics/managed/docs/portals/jupyterhub/index.md
@@ -13,7 +13,7 @@ sidebar:
 
 We provide a JupyterHub running on Kubernetes for every MetaCentrum member. The hub can be accessed at [hub.cloud.e-infra.cz](https://hub.cloud.e-infra.cz/). Sign in with your meta username (do not use @META, only the username).
 
-## Choosing image
+## Choosing Image
 
 Any Jupyter notebook image can be run; the following options are already provided:
 - minimal-notebook ([spec](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html#jupyter-minimal-notebook))
@@ -26,9 +26,9 @@ If you choose custom, you have to provide image name together with its repo and
 `minimal-notebook` is chosen as default image.
 
 
-## Choosing storage
-By default, every notebook runs with persistent storage mounted to `/home/jovyan`. Therefore, we recommend to save the data to `/home/jovyan` directory to have them accessible every time notebook is spawned. Same persistent storage is mounted to all your notebooks so you can share data across multiple instances. Furthermore, in case you delete all you JupyterHub notebook instances and spawn new one later, again same persistent storage is mounted. Therefore your data are preserved across instances and across spawns.
+## Choosing Storage
 
+By default, every notebook runs with persistent storage mounted at `/home/jovyan`. Therefore, we recommend saving your data to the `/home/jovyan` directory so that they are accessible every time a notebook is spawned. The same persistent storage is mounted to all your notebooks, so you can share data across multiple instances. Furthermore, if you delete all your JupyterHub notebook instances and spawn a new one later, the same persistent storage is mounted again. Your data are therefore preserved across instances and across spawns.
 
 Optionally, you can mount your MetaCentrum home: check the option and select the desired home. Currently, it is possible to mount only one home per notebook. In the hub, your home is located at `/home/meta/{meta-username}`.
 
@@ -42,17 +42,20 @@ liberec3-tul | ostrava1 | ostrava2-archive | pruhonice1-ibot | praha5-elixir
 plzen1 | plzen4-ntis                   
 
 ## Resources
+
 Each user on JupyterHub can use a certain amount of memory and CPU. You are guaranteed **1G of RAM** and **1 CPU**. Resource limits represent a hard limit on the resources available. Limits of **256G of RAM** and **32 CPUs** are in place, which means you can't use more than 256G of RAM and 32 CPUs for that specific instance.
 
 It is possible to utilize a GPU in your notebook. Using a GPU requires a particular setup (e.g. drivers, configuration), so it can really be used only in the TensorFlow image with GPU support. You can request at most 2 whole GPUs.
 
-## Named servers
+## Named Servers
+
 In the top left corner, go to `File &rarr; Hub Control Panel`. Fill in the `Server name` and click on `Add new server`; you will be presented with an input form page.
 
 ![add1](add1.png)
 ![add2](add2.png)
 
-## Conda environment
+## Conda Environment
+
 Conda is supported in all provided images and we can assure its functionality. 
 
 A new conda environment is created in the hub's terminal with the command `conda create -n tenv --yes python=3.8 ipykernel nb_conda_kernels` (the `ipykernel nb_conda_kernels` part is required; alternatively `irkernel` for R).
@@ -64,26 +67,30 @@ Check if environment is installed with `conda env list`. You can use the environ
 ![checkenv](check_env.png)
 ![selenv](select_env.png)
 
-## Install Conda packages
+## Install Conda Packages
+
 To install conda packages, you first have to create a new conda environment (as described above). Then install the packages in the terminal into the newly created environment, e.g. `conda install keyring -n myenv`.
 
 Open a new notebook and change the kernel in the tab `Kernel` → `Change Kernel...` → `myenv` (or the name of the kernel you installed packages into).
 
 
-## Error handling
+## Error Handling
+
 You receive an _HTTP 500: Internal Server Error_ when accessing the URL `/user/your_name`. Most likely, this error is caused by one of the following:
+
 1. You chose MetaCentrum home you haven't used before - The red 500 Error is followed by `Error in Authenticator.pre_spawn_start`
 2. You chose MetaCentrum home you don't have access to - The red 500 Error is followed by `Error in Authenticator.pre_spawn_start`
 3. While spawning, `Error: ImagePullBackOff` appears
 
 Solutions:
+
 1. Log out and log back in
 2. If you cannot access the home even after logging out and back in, you are not permitted to use this particular home
 3. Clicking on the small arrow `Event log` provides more information. Most certainly, a message tagged `[Warning]` is somewhere among them, and it provides more description. It is quite possible the repo and/or image name is misspelled. 
 - Please wait for 10 minutes.
 - The service has a timeout of 10 minutes, during which it tries to create all necessary resources. Due to the error, creation won't succeed, and after 10 minutes you will see red progress bars with the message `Spawn failed: pod/jupyter-[username] did not start in 600 seconds!`. At this point, it is sufficient to reload the page and click on `Relaunch server`.
 
-## I've chosen wrong home! What now?!
+## I've Chosen the Wrong Home! What Now?
 
 If the notebook is already running, go to `File` &rarr; `Hub Control Panel` in the top left corner and click the red `Stop My Server`. In a couple of seconds, your notebook container instance will be deleted (the stop button disappears) and you can `Start Server` again with a different home. 
 
@@ -91,8 +98,6 @@ Alternatively, you can create another named server. Fill in the `Server name` an
 
 All of your named servers are accessible under the `Hub Control Panel`, where you can manage them (create, delete, log in to).
 
+## Feature Requests
 
-## Feature requests
 Any tips for features or new notebook types are welcome at <a href="mailto:k8s@ics.muni.cz">IT Service desk</a>.
-
-
diff --git a/topics/managed/docs/workflow-execution/teswes.md b/topics/managed/docs/workflow-execution/teswes.md
index 1a288cdd..fdf6c296 100644
--- a/topics/managed/docs/workflow-execution/teswes.md
+++ b/topics/managed/docs/workflow-execution/teswes.md
@@ -10,11 +10,14 @@ sidebar:
 ---
 
 # TESK/WES
+
 As members of ELIXIR, we support TESK/WES deployment, development and infrastructure. 
 
 ## TESK
+
 [TESK](https://github.com/EMBL-EBI-TSI/TESK) is an implementation of a task execution engine based on the [TES API](https://github.com/ga4gh/task-execution-schemas). [WES](https://github.com/elixir-cloud-aai/cwl-WES) is a complementary microservice which enables clients/users to execute CWL workflows via a TES-compatible execution backend (e.g., TESK or Funnel). 
 
-### Czech enpoints are:
+### Czech Endpoints
+
 - *TESK*: [tesk-prod.cerit-sc.cz/swagger-ui.html](https://tesk-prod.cerit-sc.cz/swagger-ui.html)
 - *WES*: [wes-prod.cerit-sc.cz/ga4gh/wes/v1/ui/](https://wes-prod.cerit-sc.cz/ga4gh/wes/v1/ui/)
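As an illustration of the TES API that TESK implements, a minimal task document could be sketched as below. The payload fields follow the TES v1 schema (tasks are submitted via `POST` to the endpoint's `/ga4gh/tes/v1/tasks` path); the image, command, and helper function are illustrative assumptions, and a real submission would additionally require authentication:

```python
import json

def make_tes_task(name: str, image: str, command: list) -> dict:
    """Build a minimal TES v1 task document with a single executor."""
    return {
        "name": name,
        "executors": [
            {
                "image": image,      # container image the task runs in
                "command": command,  # command executed inside the container
            }
        ],
    }

# Hypothetical example task: echo "hello" in an Alpine container.
task = make_tes_task("hello-tes", "alpine:3.18", ["echo", "hello"])
payload = json.dumps(task)  # body for POST /ga4gh/tes/v1/tasks
```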
-- 
GitLab