HackWeek 20: Uyuni / SUSE Manager containerization project
Summary
Both the Proxy and the Server (together with the Hub) are designed to be a "system": they need to be registered and managed. Some time needs to be spent on design and implementation to do things differently.
Package dependencies are designed for a bare-metal installation: packages that should be installed on the same host depend on each other. In containers that run a single service this no longer holds. A dependency should only exist where there is no network between the services.
Many of our required dependencies pull in more than we actually need.
Example: a container for Taskomatic.
Taskomatic executes tasks and sometimes scripts. We need dependencies on:
- spacewalk-java: all the Tomcat libraries are used by Taskomatic as well
- all the external Java library dependencies
- spacewalk-backend: Taskomatic calls spacewalk-repo-sync, mgr-inter-sync, etc.
- susemanager-tools: Taskomatic calls mgr-create-bootstrap-repo
With this we get three of our biggest packages in one container, including all of their dependencies.
We need to revisit the dependencies so that only what is really needed ends up in each container; see the audit sketch below.
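To see what each of these packages actually drags in, a quick dependency audit is useful. The following is a minimal sketch, assuming an RPM-based host where the packages are installed; it only lists direct requirements, not the full transitive closure, and the package names are taken from the example above.

```python
# Minimal sketch: list the direct RPM requirements of the packages a
# Taskomatic container would pull in. Assumes an RPM-based host with
# the packages installed; package names are from the example above.
import subprocess

PACKAGES = ["spacewalk-java", "spacewalk-backend", "susemanager-tools"]

def direct_requires(package: str) -> list[str]:
    """Return the direct 'Requires:' entries of an installed package."""
    result = subprocess.run(
        ["rpm", "-q", "--requires", package],
        capture_output=True, text=True, check=True,
    )
    return sorted(set(result.stdout.splitlines()))

for pkg in PACKAGES:
    deps = direct_requires(pkg)
    print(f"{pkg}: {len(deps)} direct requirements")
    for dep in deps:
        print(f"  {dep}")
```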
SUSE Manager generates a lot of logs, often multiple log files per service. The separation gives a better overview and makes it easier to find the real problems. But it seems it is not "best practice" to collect logs in a log volume; the common recommendation for containers is to write to stdout/stderr and let the runtime collect the output.
We need to decide whether to follow best practice or to continue with our well-known approach; a sketch of the stdout approach follows below.
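For illustration, with the container best practice each service writes to stdout/stderr and the runtime (podman/docker logs, journald) takes care of collection. A minimal sketch of that approach; the service name and log format are just examples:

```python
# Minimal sketch: log to stdout so the container runtime collects the
# output, instead of writing per-service files into a log volume.
# Logger name and format are illustrative only.
import logging
import sys

def setup_container_logging(service: str) -> logging.Logger:
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
    )
    logger = logging.getLogger(service)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

log = setup_container_logging("taskomatic")
log.info("task run started")
```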
- How to debug things when running in a container?
- How to get all information we need for L3s? (supportconfig, spacewalk-debug)
- How to tell the customer to fetch files from inside the container, or to execute SQL statements to get data?
Before we ship the product as containers, we had better have answers to these questions.
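One conceivable direction, purely as a sketch: wrap the container runtime so that support data can be collected from the host without the customer entering the container. This assumes podman as the runtime and uses only its standard exec and cp subcommands; the container name and file paths are hypothetical examples.

```python
# Sketch: collect debug data from a service container from the host,
# so nobody has to know the container internals. Assumes podman;
# container name and paths are hypothetical examples.
import subprocess

def run_in_container(container: str, command: list[str]) -> str:
    """Run a command inside a running container and return its output."""
    result = subprocess.run(
        ["podman", "exec", container, *command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def copy_from_container(container: str, src: str, dest: str) -> None:
    """Copy a file out of a container to the host."""
    subprocess.run(["podman", "cp", f"{container}:{src}", dest], check=True)

# Hypothetical usage: fetch a log file from a Taskomatic container.
copy_from_container(
    "taskomatic", "/var/log/rhn/rhn_taskomatic_daemon.log", "./taskomatic.log"
)
```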
SUSE Manager uses a lot of command-line tools. Using them inside a container could be a bad idea, especially when the container does not hold a full system but just a single service.
A requirement could be to make all functionality available via the WebUI or the XMLRPC API.
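The XMLRPC API already covers part of this today and works from any host with network access to the server, with no shell inside a container required. A minimal sketch; the server URL and credentials are placeholders:

```python
# Minimal sketch: query SUSE Manager / Uyuni via the XMLRPC API instead
# of command-line tools inside a container. URL and credentials are
# placeholders.
from xmlrpc.client import ServerProxy

client = ServerProxy("https://manager.example.com/rpc/api")
key = client.auth.login("admin", "password")
try:
    for system in client.system.listSystems(key):
        print(system["id"], system["name"])
finally:
    client.auth.logout(key)
```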
The setup procedure needs to be implemented from scratch. The existing setup scripts can only be used as a guideline: the setup of single services in containers differs too much for the existing scripts to be reused.
The Retail Branch Server (RBS) is configured via Salt formulas. But if the server is only a service without Salt, they cannot be used. A different concept is required if the RBS is to become a container.
The same is true for the Hub and the normal Server.
Services running on the Proxy:
- apache
- squid
- salt-broker
- sshd (for SSH push)
- tftpd (optional)
- jabberd (traditional)
- salt-minion (?)
- dhcpd (retail)
- dns (retail)
Services running on the Server:
- apache
- tomcat
- taskomatic
- rhn-search
- cobbler
- tftpd
- salt-master
- salt-api
- postgresql
- jabberd (traditional)
- osa-dispatcher (traditional)
- websockify (?)
- postfix (or another way to send mail)