Docker Hub build: https://hub.docker.com/repository/registry-1.docker.io/nasqueron/dev-workspace-go/builds/de644289-3bdc-4645-a06c-33be426a16e5
All Stories
Apr 6 2023
Apr 5 2023
Allow spaces in command name
Apr 4 2023
Add the actual script
Solved at Nasqueron by upgrading from 3.11.8 to 3.11.11.
Another occurrence of this issue is reported at https://ask.csdn.net/questions/7791019
Apr 3 2023
Everything provisioned correctly, but the RabbitMQ management plugin now returns a 500 when we visit the notifications exchange.
Restore blank line before touch
$ salt dwellers state.sls roles/paas-docker/containers/rabbitmq
[…]
----------
          ID: rabbitmq_orange-rabbit_root_password
    Function: cmd.script
      Result: True
     Comment: Command 'rabbitmq_orange-rabbit_root_password' run
     Started: 23:10:16.479077
    Duration: 26894.14 ms
     Changes:
              ----------
              pid:
                  3451250
              retcode:
                  0
              stderr:
              stdout:
                  Deleting user "guest" ...
                  Adding user "root" ...
                  Done. Don't forget to grant the user permissions to some virtual hosts!
                  See 'rabbitmqctl help set_permissions' to learn more.
                  Setting tags for user "root" to [administrator] ...
                  Setting permissions for user "root" in vhost "/" ...
[…]
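For reference, the stdout above corresponds to the usual rabbitmqctl provisioning sequence; a minimal sketch of what the cmd.script likely runs, assuming the root password is passed in through an environment variable (an assumption, not the actual secret plumbing), would be:

# Sketch inferred from the stdout above; $ROOT_PASSWORD is a placeholder.
rabbitmqctl delete_user guest
rabbitmqctl add_user root "$ROOT_PASSWORD"
rabbitmqctl set_user_tags root administrator
rabbitmqctl set_permissions -p / root ".*" ".*" ".*"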
Rebased. We tested the deployment; we get a 401 error.
OK for notifications, and also for auth-grove.
Not sure at which moment the issue was solved (with opcache it's not easy to tell), but merely restarting the server after creating the directories wasn't enough.
Sentry works like a charm: T1821
Deploy and correct bug
Update ports information, switch from the frontend container to using our nginx directly (let's avoid nginx to nginx), rebase
Apr 2 2023
Everything is okay; fauve should now have access on windriver.
salt windriver state.apply roles/core/users
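A hypothetical way to double-check the provisioning from the salt master (not part of the original log):

$ salt windriver user.info fauve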
This deployment created the incident:
Works like a charm for Hound:
s/salt/__salt__
Update environment for the concept of feature flags. See https://help.penpot.app/technical-guide/configuration/. Bump PostgreSQL and Redis versions. Add Exim container so we can send mail.
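A minimal sketch of the feature-flag part of that environment, following the configuration guide linked above; the flag values are illustrative, not the ones actually deployed:

# Illustrative values only: Penpot reads space-separated enable-*/disable-*
# tokens from the PENPOT_FLAGS environment variable.
PENPOT_FLAGS="enable-registration enable-login-with-password disable-demo-users"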
A Kafka offset reset was needed:
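A hedged sketch of what such a reset looks like with the standard Kafka tooling; the broker address, consumer group, and topic below are placeholders, not the values actually used here:

# Placeholder broker, group and topic; run against the lagging consumer group.
$ kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
    --group snuba-post-processor --topic events \
    --reset-offsets --to-latest --execute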
If one of the two topics lags, we'd have:
$ notification-push --project Nasqueron --group ops --service monitoring --type autoheal.kafka_offset.start --text "Containers sentry_post_process_forwarder_ have an issue. Identified as Kafka offset issue. Starting automatic healing procedure."
$ notification-push --project Nasqueron --group ops --service monitoring --type autoheal.kafka_offset.done --text "Containers sentry_post_process_forwarder_ automatic healing done. Containers should be alive."
We don't use uptime anymore.