
Rockons: Invalid environment variable (...) #1588

Closed
daniel-illi opened this issue Dec 17, 2016 · 27 comments

@daniel-illi
Contributor

While developing the metadata json file for a new rockon, I ran into this error with one of my defined environment variables:

Houston, we've had a problem.
Invalid environment variabled(POSTGRES_PASSWORD)

    Traceback (most recent call last):
      File "/opt/rockstor/eggs/gunicorn-0.16.1-py2.7.egg/gunicorn/workers/sync.py", line 34, in run
        client, addr = self.socket.accept()
      File "/usr/lib64/python2.7/socket.py", line 202, in accept
        sock, addr = self._sock.accept()
    error: [Errno 11] Resource temporarily unavailable

I found where this error is generated: line 142 of the file src/rockstor/storageadmin/views/rockon_id.py:

140:                    for e in env_map.keys():
141:                        if (not DContainerEnv.objects.filter(container=co, key=e).exists()):
142:                            e_msg = ('Invalid environment variabled(%s)' % e)
143:                            handle_exception(Exception(e_msg), request)

Initially I thought this happens when I define an environment variable that has no corresponding ENV set in the Dockerfile, but defining the same variable in the Dockerfile didn't help: the same error was thrown.

I ran into the same problem before, as did someone else:

@phillxnet
Member

@daniel-illi Thanks for reporting this and linking to the other report.
Haven't really looked into this but did notice that both of the cited reports involve the same env var name:

MYSQL_ROOT_PASSWORD

and your report here has an equally repeatable var name "POSTGRES_PASSWORD". I wonder if we are looking at some kind of accidental repeating env name / failure to isolate / or update-type thing here.
The mariadb.json already uses MYSQL_ROOT_PASSWORD, but I don't see any existing repo rock-on using "POSTGRES_PASSWORD". Also, all cited examples contain an underscore, but I don't see how this would be a problem.

Could you check to see if any of your locally defined and installed Rock-ons use the same env name? Or this could be some other bug in the container object key storage.

Also noting we have a typo in that exception message: "variabled".

@daniel-illi
Contributor Author

daniel-illi commented Dec 18, 2016

@phillxnet I'm testing the rockon in a fresh vm without any other custom rockons. No rockons are installed yet.

@Tschaul

Tschaul commented Jan 2, 2017

I get the same error with SELF_URL_PATH in the Tiny-Tiny-RSS image from https://github.com/clue/docker-ttrss

Here's my rockon:

    "TinyTinyRSS Test": {
        "container_links": {
            "ttrss-test": [
                {
                    "name": "db",
                    "source_container": "ttrss_postgres-test"
                }
            ]
        },
        "containers": {
            "ttrss-test": {
                "image": "clue/ttrss",
		        "tag": "latest",
                "launch_order": 2,
                "ports": {
                    "80": {
                        "description": "Tiny-Tiny-RSS WebUI port. Suggested default: 8888",
                        "host_default": 8286,
                        "label": "WebUI port",
                        "protocol": "tcp",
                        "ui": true
                    }
                },
                "environment":{
                    "SELF_URL_PATH":{
                        "description":"Url under which ttrss will be reachable. Enables browser integration if set.",
                        "label":"Self url path",
                        "index":1
                    }
                }
                
            },
            "ttrss_postgres-test": {
                "image": "postgres",
		        "tag": "latest",
                "launch_order": 1,
                "volumes": {
                    "/var/lib/posgresql/data/pgdata": {
                        "description": "Choose a Share for the PosgreSQL database of Tiny-Tiny-Rss.",
                        "label": "Database Storage",
                        "min_size": 1073741824
                    }
                },
                "opts":[
                    ["-e","PGDATA=/var/lib/posgresql/data/pgdata"],
                    ["-e","POSTGRES_USER=ttrss"]
                ]
            }
        },
        "version": "latest stable",
        "description": "Web-based news feed reader and aggregator",
        "icon": "https://tt-rss.org/gitlab/uploads/project/avatar/1/ic_launcher.png",
        "more_info": "<p>The default user credentials are admin/password. Make sure to change them after your first login.</p>",
        "ui": {
            "https": true,
            "slug": ""
        },
        "website": "https://tt-rss.org/"
    }
}

@daniel-illi
Contributor Author

Another report with the same problem: https://forum.rockstor.com/t/softether-vpn-rockon/2637/3

@alazare619

+1 Same exact issue I'm facing with softether that was reported on the forums.

@rcastberg

I have the same issues as mentioned here, but only if I define container_links and a second container; if I remove those the issue disappears. Could this be related to checking the wrong docker image for environmental variables?

@anatox

anatox commented Apr 3, 2017

A temporary fix is to duplicate the environment json in all containers. It seems like rockstor tries to assign all the environment variables defined in the rockon to all of its containers and fails there.
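This diagnosis can be illustrated with a minimal Python sketch. The container names and the in-memory `container_envs` lookup are hypothetical stand-ins for the `DContainerEnv` ORM query quoted earlier: the install wizard submits one flat env_map for the whole Rock-on, so each per-container validation pass trips over keys that belong to a sibling container.

```python
# Hypothetical stand-in for DContainerEnv.objects.filter(...): which env
# keys each container actually defines in the Rock-on JSON.
container_envs = {
    "ttrss": {"SELF_URL_PATH"},
    "postgres": {"POSTGRES_PASSWORD"},
}

# The install wizard sends ONE flat env_map for the whole Rock-on:
env_map = {"SELF_URL_PATH": "http://nas.local", "POSTGRES_PASSWORD": "secret"}

def validate(container, env_map):
    # Mirrors the loop quoted from rockon_id.py: every submitted key must
    # exist for *this* container, or the install is rejected.
    for e in env_map:
        if e not in container_envs[container]:
            raise Exception("Invalid environment variable (%s)" % e)

msg = ""
try:
    validate("ttrss", env_map)  # POSTGRES_PASSWORD belongs to "postgres"
except Exception as exc:
    msg = str(exc)
print(msg)  # Invalid environment variable (POSTGRES_PASSWORD)
```

Validating the "ttrss" container against the combined map fails on the postgres-only key, matching the reports above.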

@flukejones

This is still an issue and really needs to be fixed. The linked PR requires work.

@thailgrott

thailgrott commented May 2, 2019

Noticed that this is still an issue when trying to combine some containers into a single Rockon to improve the user experience of this Rockon.

I wrote the following json definition:

{
    "Nginx Reverse Proxy (jWilder)": {
        "containers": {
            "default-site": {
                "image": "nginx",
                "tag": "latest",
                "launch_order": 1,
                "ports": {},
                "volumes": {
                    "/usr/share/nginx/html": {
                        "description": "Choose a volume for static files.",
                        "label": "Default htdocs"
                    }
                },
                "environment": {
                    "VIRTUAL_HOST": {
                        "description": "Specify the virtual host name to add to the nginx proxy.",
                        "label": "Virtual Host",
                        "index": 1
                    },
                    "NGINX_HOST": {
                        "description": "Specify the virtual host again for the default host.",
                        "label": "Default Host",
                        "index": 2
                    },
                    "LETSENCRYPT_HOST": {
                        "description": "Specify the virtual host name for letsencrypt - likely the same as the virtual host.",
                        "label": "Letsencrypt host",
                        "index": 3
                    },
                    "LETSENCRYPT_EMAIL": {
                        "description": "Specify an email for letsencrypt.",
                        "label": "Letsencrypt email",
                        "index": 4
                    },
                    "LETSENCRYPT_TEST": {
                        "description": "Specify using letsencrypt test or production certificate generation.",
                        "label": "Letsencrypt test(true/false)",
                        "index": 5
                    }
                },
                "opts": [
                    ["--net", "Internet_Proxy"],
                    ["-h", "default-site"]
                ]
            },
            "letsencrypt-nginx-proxy-companion": {
                "image": "jrcs/letsencrypt-nginx-proxy-companion",
                "tag": "latest",
                "launch_order": 2,
                "ports": {},
                "volumes": {
                    "/etc/nginx/certs": {
                        "description": "Choose a volume for generated certificates and keys.",
                        "label": "Certs n Keys"
                    },
                    "/etc/nginx/vhost.d": {
                        "description": "Choose a volume for the vhost.d config directory. ",
                        "label": "Virthosts configs"
                    },
                    "/usr/share/nginx/html": {
                        "description": "Choose a volume to write the challenge files. ",
                        "label": "lets encrypt challenge"
                    }
                },
                "opts": [
                    ["--net", "Internet_Proxy"],
                    ["-v", "/var/run/docker.sock:/var/run/docker.sock:ro"],
                    ["-h", "letsencrypt-nginx-proxy-companion"]
                ]
            },
            "nginx-reverse-proxy": {
                "image": "jwilder/nginx-proxy",
                "tag": "latest",
                "launch_order": 3,
                "ports": {
                    "80": {
                        "description": "The standard HTTP port the proxy server will listen to for requests.",
                        "host_default": 8082,
                        "label": "HTTP Port",
                        "protocol": "tcp",
                        "ui": false
                    },
                    "443": {
                        "description": "The standard HTTP port the proxy server will listen to for requests.",
                        "host_default": 8083,
                        "label": "HTTPS Port",
                        "protocol": "tcp",
                        "ui": false
                    }
                },
                "volumes": {
                    "/etc/nginx/htpasswd": {
                        "description": "Choose a volume for basic authentication files.",
                        "label": "Basic Auth"
                    },
                    "/etc/nginx/certs": {
                        "description": "Choose a volume for storing certificates and keys.",
                        "label": "Certs n Keys"
                    },
                    "/etc/nginx/vhost.d": {
                        "description": "Choose a volume for the vhost.d config directory. ",
                        "label": "Virthosts configs"
                    },
                    "/usr/share/nginx/html": {
                        "description": "Choose a volume to write the challenge files. ",
                        "label": "lets encrypt challenge"
                    }
                },
                "environment": {
                    "DEFAULT_HOST": {
                        "description": "The FQDN of the default website for proxied requests.",
                        "label": "Default host",
                        "index": 1
                    }
                },
                "opts": [
                    ["--net", "Internet_Proxy"],
                    ["-v", "/var/run/docker.sock:/tmp/docker.sock:ro"],
                    ["-h", "jw-nginx-reverse-proxy"],
                    ["--label", "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"]
                ]
            } 
        },
        "description": "jWilder nginx reverse proxy and letsencrypt companion",
        "ui": {
            "slug": ""
        },
        "volume_add_support": true,
        "website": "https://hub.docker.com/_/hello-world/",
	    "version": "1.0"
    }
}

Got the following message during the attempt to install:

Houston, we've had a problem.

Invalid environment variable (LETSENCRYPT_TEST).

    Traceback (most recent call last):
      File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 68, in run_for_one
        self.accept(listener)
      File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 27, in accept
        client, addr = listener.accept()
      File "/usr/lib64/python2.7/socket.py", line 202, in accept
        sock, addr = self._sock.accept()
    error: [Errno 11] Resource temporarily unavailable

@FroggyFlox
Member

Hi again @KookyKrane,

I'm in the same situation as you; in fact, it was when I tried to make a rock-on for the same project as you (Nginx Reverse Proxy (jWilder)) and hit this very issue that I decided to get myself familiarized with the rock-on system and to work on improving it. As described in the other issue you just commented on, I'm currently working on implementing docker networks (which will indirectly help with the current issue), and I plan on having a look at the proposed PR (#1688) thereafter.

@anatox did a lot of work in this PR to fix this issue, and there was a helpful discussion therein with @MFlyer, so even though that part of the code has changed quite a bit, building upon / adapting @anatox's work should be a very good start (to say the least).

@thailgrott

Thx @FroggyFlox for pointing me to #1688. These patches are certainly motivating me to become more familiar with the system.

@FroggyFlox
Member

A new forum user has encountered the same issue:
https://forum.rockstor.com/t/rock-on-install-fail-invalid-environment-variable-puid/7170

In addition to reinforcing the need for this issue to be fixed, let's try to update the forum thread above once this issue is closed.

@phillxnet
Member

Re: @anatox 's #1588 (comment)

A temporary fix is to duplicate environment json in all containers. It seems like rockstor tries to assign all the environment variables defined in rockon to all of its containers and fails there.

Linking for context to @freaktechnik's now-closed-as-later-duplicate issue, which noted the same work-around: #2202 (comment)

I have found a workaround, which is to have the same set of env vars for all containers. Curiously this repeats the fields in the env var input, but will only show them once in the end table. Of course this only works if there are no env var clashes.
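The quoted work-around can be sketched the same way (container and variable names here are hypothetical): declaring the union of env vars on every container makes the flat env_map validate everywhere, at the cost of forbidding env name clashes between containers.

```python
# Work-around sketch: every container declares the SAME union of env keys
# (i.e. the "environment" JSON is duplicated), so a flat env_map covering
# the whole Rock-on validates against each container.
container_envs = {
    "app":      {"DB_PASSWORD", "APP_SECRET"},
    "database": {"DB_PASSWORD", "APP_SECRET"},  # duplicated on purpose
}
env_map = {"DB_PASSWORD": "x", "APP_SECRET": "y"}

ok = all(e in container_envs[c] for c in container_envs for e in env_map)
print(ok)  # True: no container rejects any submitted key
```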

@phillxnet phillxnet self-assigned this Aug 12, 2024
@phillxnet phillxnet changed the title Rockons: Invalid environment variabled Rockons: Invalid environment variable (...) Aug 12, 2024
@phillxnet
Member

phillxnet commented Aug 12, 2024

Title updated to reflect the latest testing code behaviour - where we also no longer have as expressive a trace-back; i.e.:

Invalid environment variable (DB_ADMIN_PASSWORD).
NoneType: None

This is in the case of the linked draft PR in rockon-registry (rockstor/rockon-registry#379): the most recent reproducer expressing this now long-standing limitation regarding multi-container Rock-on definitions that also involve environment elements.

I have assigned myself to this issue, informed by @anatox's prior work (#1688), in the context of the many underlying changes made to our Rock-ons code in the interim, as represented in the current testing branch.

@phillxnet
Member

As-is, in current testing we pass the following from the front-end to the back-end during a Rock-on install command request POST (using the reference draft PR as a reproducer):

[12/Aug/2024 14:07:06] DEBUG [storageadmin.views.rockon_id:109] install request with share_map={'bareos-backups': '/var/lib/bareos/storage', 'bareos-catalog-backup': '/var/lib/bareos', 'bareos-catalog': '/var/lib/postgresql/data', 'bareos-dir-config': '/etc/bareos', 'bareos-storage-config': '/etc/bareos', 'bareos-webui-config': '/etc/bareos-webui'}, port_map={'9100': 9100, '9101': 9101, '9103': 9103}, dev_map={}, cc_map={}, env_map={'POSTGRES_PASSWORD': 'ptt2', 'DB_ADMIN_PASSWORD': 'ptt2', 'DB_PASSWORD': 'ctt2', 'BAREOS_WEBUI_PASSWORD': 'wtt2'}
[12/Aug/2024 14:07:06] ERROR [storageadmin.util:45] Exception: Invalid environment variable (DB_ADMIN_PASSWORD).
NoneType: None
  • share_map={'bareos-backups': '/var/lib/bareos/storage', 'bareos-catalog-backup': '/var/lib/bareos', 'bareos-catalog': '/var/lib/postgresql/data', 'bareos-dir-config': '/etc/bareos', 'bareos-storage-config': '/etc/bareos', 'bareos-webui-config': '/etc/bareos-webui'}
  • port_map={'9100': 9100, '9101': 9101, '9103': 9103}
  • dev_map={}
  • cc_map={}
  • env_map={'POSTGRES_PASSWORD': 'ptt2', 'DB_ADMIN_PASSWORD': 'ptt2', 'DB_PASSWORD': 'ctt2', 'BAREOS_WEBUI_PASSWORD': 'wtt2'}

I.e. as per @anatox's exposition (many years ago now), we lump all config info, for all containers, together: defeating env name duplication across containers in multi-container Rock-ons, even though it is independently detailed in the source JSON that is the Rock-on definition. This applies to the other config categories here also, as per @freaktechnik's following #2202 (comment):

This also appears to apply to devices.

Which limits multi-container Rock-ons to per Rock-on unique share/port/cc/env definitions.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Aug 12, 2024
Add container dimension to Rock-on input environment during
Rock-on install wizard front-end. Cherry picked from anatox's
prior work on GitHub.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Aug 12, 2024
@phillxnet
Member

Cherry picking front-end changes proposed by @anatox to add container info to the 'environment' input matrix, we get the following input from our Rock-on install front-end:

env_map={'85': {'POSTGRES_PASSWORD': 'ptt2'}, '86': {'DB_ADMIN_PASSWORD': 'ptt2', 'DB_PASSWORD': 'ctt2', 'BAREOS_WEBUI_PASSWORD': 'wtt2'}}

I.e. we now have a second matrix dimension associated with container id within our DB, preserving per-container env separation.

  • '85': {'POSTGRES_PASSWORD': 'ptt2'}
  • '86': {'DB_ADMIN_PASSWORD': 'ptt2', 'DB_PASSWORD': 'ctt2', 'BAREOS_WEBUI_PASSWORD': 'wtt2'}
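With the extra dimension in place, validation can be done per container. A minimal sketch using the ids and values from the log above; the `container_envs` lookup again stands in for the ORM query, so treat this as illustrative rather than the actual back-end code:

```python
# Per-container env definitions keyed by container DB id ('85'/'86'
# taken from the log above); stand-in for the DContainerEnv query.
container_envs = {
    "85": {"POSTGRES_PASSWORD"},
    "86": {"DB_ADMIN_PASSWORD", "DB_PASSWORD", "BAREOS_WEBUI_PASSWORD"},
}

# The env_map now carries a container-id dimension:
env_map = {
    "85": {"POSTGRES_PASSWORD": "ptt2"},
    "86": {"DB_ADMIN_PASSWORD": "ptt2", "DB_PASSWORD": "ctt2",
           "BAREOS_WEBUI_PASSWORD": "wtt2"},
}

# Each container is validated only against its OWN submitted keys, so
# duplicate env names across containers no longer collide.
invalid = [e for cid, envs in env_map.items()
           for e in envs if e not in container_envs[cid]]
print(invalid)  # []
```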

@phillxnet
Member

phillxnet commented Aug 12, 2024

The addition, in the front-end, of our new environment matrix dimension has the following buggy artifact:

[Screenshot: Rock-ons-install-wizard-env-summary-issue]

I.e. we are summarising the 'new' top level environment matrix dimension.

[EDIT]:
Having completed the reproducer Rock-on install (BareOS-server-ser), we have confirmation of the cosmetic (front-end only) nature of the above: i.e. in the following (via the spanner of the installed draft PR reproducer Rock-on) we see all expected environment variables presented as entered:

[Screenshot: Installed-Rock-ons-env-summary-ok]

@phillxnet
Member

phillxnet commented Aug 12, 2024

Remaining sub-issues:

[12/Aug/2024 15:57:57] DEBUG [storageadmin.views.rockon_id:109] install request with share_map={'bareos-backups': '/var/lib/bareos/storage', 'bareos-catalog-backup': '/var/lib/bareos', 'bareos-catalog': '/var/lib/postgresql/data', 'bareos-dir-config': '/etc/bareos', 'bareos-storage-config': '/etc/bareos', 'bareos-webui-config': '/etc/bareos-webui'}, port_map={'9100': 9100, '9101': 9101, '9103': 9103}, dev_map={}, cc_map={}, env_map={'85': {'POSTGRES_PASSWORD': 'ptt2'}, '86': {'DB_ADMIN_PASSWORD': 'ptt2', 'DB_PASSWORD': 'ctt2', 'BAREOS_WEBUI_PASSWORD': 'wtt2'}}

@FroggyFlox
Member

@phillxnet,
In case it helps, your second point reminds me of #2000

@phillxnet
Member

phillxnet commented Aug 12, 2024

@FroggyFlox Thanks, that helps. We may have to tackle that separately, in a dedicated PR: still assessing how far to go with this issue, which is predominantly related to env per container in multi-container Rock-ons after all.

Confirming share entry as:

Resource type  Name                    Mapped representation
Share          bareos-backups          /var/lib/bareos/storage
Share          bareos-catalog-backup   /var/lib/bareos
Share          bareos-catalog          /var/lib/postgresql/data
Share          bareos-dir-config       /etc/bareos
Share          bareos-storage-config   /etc/bareos
Share          bareos-webui-config     /etc/bareos-webui

With corresponding front-end passing to back-end the following (re shares):

[12/Aug/2024 16:40:55] DEBUG [storageadmin.views.rockon_id:109] install request with share_map={'bareos-backups': '/var/lib/bareos/storage', 'bareos-catalog-backup': '/var/lib/bareos', 'bareos-catalog': '/var/lib/postgresql/data', 'bareos-dir-config': '/etc/bareos', 'bareos-storage-config': '/etc/bareos', 'bareos-webui-config': '/etc/bareos-webui'}

And the resulting share mapping as reported by our 'spanner' on the resulting installed Rock-on:

Resource type  Name                    Mapped representation
Share          bareos-backups          /var/lib/bareos/storage
Share          bareos-catalog-backup   /var/lib/bareos
Share          bareos-catalog          /var/lib/postgresql/data
Share          bareos-storage-config   /etc/bareos
Share          bareos-storage-config   /etc/bareos
Share          bareos-webui-config     /etc/bareos-webui

[EDIT See spin-off issue "Rock-on install wizard obfuscates share container info": #2886 ]

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Aug 12, 2024
Add back-end container dimension awareness re environment
passed from Rock-on install wizard. Rebase of earlier work
cherry picked/re-based from anatox's prior work on GitHub.
@phillxnet
Member

phillxnet commented Aug 12, 2024

@FroggyFlox Currently looking at where/how to address:

  • Fix the Rock-on pre-install wizard env summary: it now lists the container dimension, not the Envs we entered.

[EDIT re the JS for our pre-install summary table:

this.table_template = window.JST.rockons_summary_table;

render: function() {
    RockstorWizardPage.prototype.render.apply(this, arguments);
    this.$('#ph-summary-table').html(this.table_template({
        share_map: this.share_map,
        port_map: this.port_map,
        cc_map: this.cc_map,
        dev_map: this.dev_map,
        env_map: this.env_map
    }));
    return this;
},

<div class="progress">
    <div class="progress-bar" role="progressbar" aria-valuenow="60" aria-valuemin="0" aria-valuemax="100" style="width: 100%;">
        <span class="sr-only">60% Complete</span>
    </div>
</div>
<div class="alert alert-warning">
    <p>Please verify your input and click submit to start the installation.</p>
</div>
<div id="ph-summary-table"></div>

]

@FroggyFlox
Member

Thanks a lot for taking care of all that.
Making these tables aware of the containers may be more work than the scope of this issue warrants, as I believe nothing besides maybe the display of the network customization options is container-aware.

Maybe we can add the container in parenthesis to the value in the table for this issue (or something along those lines)?
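That suggestion can be sketched as follows. This is a language-agnostic illustration in Python (the real summary table is a JS/JST template, and the names here are hypothetical): flatten the nested env_map into rows whose value carries the container label in parentheses.

```python
# Flatten the container-keyed env_map into summary-table rows, appending
# the container name in parentheses (ids and names are hypothetical).
env_map = {"85": {"POSTGRES_PASSWORD": "ptt2"},
           "86": {"DB_PASSWORD": "ctt2"}}
container_names = {"85": "bareos-catalog", "86": "bareos-dir"}

rows = [(key, "%s (%s)" % (value, container_names[cid]))
        for cid, envs in env_map.items()
        for key, value in envs.items()]
print(rows)
# [('POSTGRES_PASSWORD', 'ptt2 (bareos-catalog)'),
#  ('DB_PASSWORD', 'ctt2 (bareos-dir)')]
```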

@phillxnet
Member

@FroggyFlox Agreed.
Re:

Maybe we can add the container in parenthesis to the value in the table for this issue (or something along those lines)?

I'm actually working on this approach currently, as it goes. We may well have to improve the tables in time though.

@phillxnet
Member

phillxnet commented Aug 14, 2024

Existing multi-container Rock-ons in the rockon-registry repo:

Proposed multi-container Rock-on:

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Aug 14, 2024
Rock-on front-end install wizard does not collate container info
regarding Rock-on defined environment elements. Affects multi-container
Rock-ons only. Results in back-end failure to successfully assign
user-input environmental variables. Resolved by adding a container id
dimension to the environmental matrix created by the install wizard,
and updating the back-end Rock-on instantiator to take advantage
of this info. Previously a reverse engineering approach was taken,
which cannot work where, for example, two containers within the same
Rock-on definition use the same environmental variable name.

Thanks to GitHub user @anatox - the majority of this PR is based on
their prior submission to the project - alas unmerged at the time.

Also adds
- Rock-on install debug log re front-end info received.
- Fixes, partly, resulting installer wizard summary feedback bug.
phillxnet added a commit that referenced this issue Aug 15, 2024
…ent-variable

Rockons: Invalid environment variable (...) #1588
@phillxnet
Member

Closing as:
Fixed by #2887

Thanks again to @anatox for the original push (and PR) to approach this issue.

@phillxnet phillxnet added this to the 5.1.X-X Stable release milestone Aug 19, 2024
@freaktechnik

freaktechnik commented Sep 12, 2024

I'm assuming I'll have to wait for this to be able to use a rockon with multiple containers (like this) that share shares? This used to work before #2064.

@phillxnet
Member

@freaktechnik Hello again.
This issue was closed by the indicated PR, and an rpm containing this improvement was released via the testing channel under https://github.com/rockstor/rockstor-core/releases/tag/5.0.14-0; there was also another multi-container improvement added.

Forum announcement of 5.0.14-0: https://forum.rockstor.com/t/v5-0-testing-channel-changelog/8898/27

Hope that helps. Not sure if it resolves your issue however. You may be able to share Shares between containers via a hard-wired --volumes-from option, assuming you instruct the user that share-name creation must be done on the owning container.
