SovereignCloudStack/hardware-landscape

Documentation of the SCS Hardware Landscape

Purpose of this repository

This Git repository documents, configures and automates the management and setup of the Sovereign Cloud Stack hardware environment. This environment is built in the context of the VP18 project work package and is running at a colocation facility of JH Computers.

The main goals of this environment are:

  • Runtime environment for the blueprint of the "SCS Turnkey Solution"
    • Run, test and demonstrate all components
    • Demonstration showroom for interested parties (A blueprint for potential SCS operators to get an idea of the OSISM setup)
    • Training and demonstration environment
    • Testing of new releases (Environment for the future execution of QA tests)
    • Reproduce and analyze production problems on a real system
    • Develop and test topics that can only be analyzed, tested or reproduced on a real system
    • Dogfooding / continuous operation with real workload
    • Develop operational procedures
    • Develop and test certification tests
  • Provide a network testing lab for:
    • Switch/network automation
    • SONiC packaging
    • Evaluating network architecture concepts
    • Test environment to evaluate and test concepts and implementations in layer-3 underlay networking with SONiC
    • Test environment to evaluate and test improvements to the monitoring stack
  • Run a "production-like" experimentation and testing platform

A visual impression

 

References

Environment Links

Zone 1 Environment

:::info

This list is incomplete.

:::

| Name | URL | Username | Password Key | Note |
|---|---|---|---|---|
| Horizon (via Keystone) | https://api.zone1.landscape.scs.community | admin | keystone_admin_password | domain: default |
| Horizon (via Keystone) | https://api-internal.zone1.landscape.scs.community | admin | keystone_admin_password | domain: default |
| ARA | https://ara.zone1.landscape.scs.community:8120 | ara | ara_password | |
| Ceph | https://api-internal.zone1.landscape.scs.community:8140 | admin | | |
| Flower | https://flower.zone1.landscape.scs.community | | | |
| Grafana | https://api-internal.zone1.landscape.scs.community:3000 | admin | grafana_admin_password | |
| Homer | https://homer.zone1.landscape.scs.community | | | |
| Keycloak | https://keycloak.zone1.landscape.scs.community/auth | admin | | |
| Netbox | https://netbox.zone1.landscape.scs.community | admin | password | |
| Netdata | http://testbed-manager.zone1.landscape.scs.community:19999 | | | |
| Nexus | https://nexus.zone1.landscape.scs.community | admin | | |
| OpenSearch Dashboards | https://api.zone1.landscape.scs.community:5601 | opensearch | opensearch_dashboards_password | |
| Prometheus | https://api-internal.zone1.landscape.scs.community:9091 | admin | | |
| RabbitMQ | https://api-internal.zone1.landscape.scs.community:15672 | openstack | rabbitmq_password | |
| phpMyAdmin | https://phpmyadmin.zone1.landscape.scs.community | root | database_password | |
| Webserver | http://files.zone1.landscape.scs.community:18080/ | n/a | n/a | Install Files |
| DNS | https://portal.cnds.io | | | |
| HAProxy (testbed-node-0) | http://testbed-node-0.zone1.landscape.scs.community:1984 | openstack | | |
| HAProxy (testbed-node-1) | http://testbed-node-1.zone1.landscape.scs.community:1984 | openstack | | |
| HAProxy (testbed-node-2) | http://testbed-node-2.zone1.landscape.scs.community:1984 | openstack | | |
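As a rough sketch of how the Keystone endpoint and credentials from the table can be consumed, the following shell snippet configures the OpenStack CLI for Zone 1. The endpoint, username, domain and the keystone_admin_password key are taken from the table; the port 5000, the /v3 path and the project name "admin" are assumptions based on common Kolla/OSISM defaults and are not confirmed here.

```bash
# Hypothetical OpenStack CLI setup for Zone 1 (sketch only).
# Port 5000, the /v3 path and the "admin" project are assumed defaults.
export OS_AUTH_URL=https://api.zone1.landscape.scs.community:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin                       # assumed project name
export OS_USERNAME=admin
export OS_PASSWORD='<value of keystone_admin_password>'

openstack endpoint list   # quick smoke test that authentication works
```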

You can gather the passwords using the following command (see also the information about the vault in the System Runbook):

make ansible_vault_show FILE=all | grep "<Password Key>:"
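For example, to look up the Keystone admin password referenced in the table above, use the entry from the "Password Key" column as the grep pattern:

```bash
# Show only the Keystone admin password from the vault output
make ansible_vault_show FILE=all | grep "keystone_admin_password:"
```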