In this configuration, the Starter complains that the commands
DBGetHostServersInfo
DBExportDevice
are not accessible in read-only mode (TAC is enabled).
After I added these to the AllowedCmds list in TAC, the Starter starts with the final message “Ready to accept requests”, BUT I cannot access the Starter device server from outside: Astor still wants to start it via SSH, and I cannot open the Starter device in Jive either.
In any case, if I do a manual restart of the service it works perfectly fine.
My work-around is to keep the DBExportDevice command disallowed, which sets the Starter service to inactive and triggers a restart.
After you get this “Ready to accept requests” message when starting the Starter: when you click on the Starter device in Jive, what kind of device info do you get?
Does it tell you whether it is exported, where it is running, and which PID it has?
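The same record Jive shows in its device-info panel can also be queried from the Tango database in PyTango. A minimal sketch, assuming PyTango is installed and the usual Starter device name pattern `tango/admin/<hostname>` (adjust the name if your Starter is registered differently):

```python
# Query the device info that Jive displays (exported flag, server, PID)
# straight from the Tango database. Assumes PyTango and a reachable
# TANGO_HOST; falls back gracefully when PyTango is not installed.
import socket

try:
    import tango
except ImportError:
    tango = None

def starter_info(host=None):
    """Print and return the DB record for the Starter device on `host`."""
    if tango is None:
        print("PyTango not available on this machine")
        return None
    dev = f"tango/admin/{host or socket.gethostname()}"
    info = tango.Database().get_device_info(dev)
    print("exported:", info.exported)
    print("server:  ", info.ds_full_name)
    print("PID:     ", info.pid)
    print("started: ", info.started_date)
    return info

if __name__ == "__main__":
    # Fails with a DevFailed if no Tango database is reachable.
    starter_info()
```

Comparing `exported` and `pid` right after boot versus after the manual restart should show whether the Starter ever registered itself properly.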
Could it be that the TANGO_HOST environment variable differs between when you start the Starter service (defined in /etc/tangorc, apparently) and when you start Astor?
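One way to check this on a Linux host is to read the environment the running Starter process actually inherited from /proc, rather than the one your shell sees. A sketch with the standard library only; the Starter PID would come from e.g. `pidof Starter`:

```python
# Compare the TANGO_HOST this shell sees with the one a running process
# inherited at exec time, by parsing /proc/<pid>/environ (Linux only).
import os

def read_proc_environ(pid):
    """Parse /proc/<pid>/environ (NUL-separated KEY=VALUE) into a dict."""
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read()
    env = {}
    for entry in raw.split(b"\0"):
        key, sep, value = entry.partition(b"=")
        if sep:
            env[key.decode(errors="replace")] = value.decode(errors="replace")
    return env

# Demo on our own process; substitute the Starter's PID to inspect it.
print("TANGO_HOST here:", read_proc_environ(os.getpid()).get("TANGO_HOST"))
```

If the value in the Starter's environ differs from the one Astor uses, the two would be talking to different databases, which would explain the mismatch.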
Interesting, we might be having the same issue…
We are running a couple of Linux VMs with Tango servers for each experimental area. The old ones run Tango 9.2.5a, while the newest one, set up recently, runs Tango 9.3.4 (PyTango 9.3.3).
After a full reboot, tango.service and tango-starter.service are running fine, but the devices started by the Starter cannot be accessed from outside (the Jive database listing works fine)!
However, if I SSH onto the VM and access the devices locally in PyTango, everything works.
Issuing a “sudo systemctl restart tango-starter” solves the issue and I get a green light in Astor (running on a different workstation).
Since I am not manually starting anything, just restarting the service, I believe it cannot be related to environment variables or the like. Could this be related to the socket's port binding, or something more intricate?
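If the port-binding suspicion is worth checking, one could compare what the Starter is actually listening on before and after the manual restart. A hedged sketch, assuming the binary is named `Starter` (adjust if yours differs) and that `ss` from iproute2 is available; seeing process names in `ss -p` output generally requires root:

```shell
# Check whether the Starter's CORBA endpoint is listening, and on which
# interface/port. Run once after reboot and again after
# `systemctl restart tango-starter`, then compare the bound addresses.
pid=$(pidof Starter || true)
if [ -n "$pid" ]; then
    ss -tlnp 2>/dev/null | grep "pid=$pid" || echo "no listening socket for PID $pid"
else
    echo "Starter not running on this host"
fi
```

A socket bound to the wrong interface (e.g. a loopback-only binding) would match the symptom of local PyTango access working while remote access fails.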
I did not investigate further since it can be fixed manually and reboots are (hopefully) rare, but if this sounds similar I can look into it again.
PS: Below is the tango-starter.service definition, which is identical to the one on the other VMs, where I never witnessed this issue.