a historical database with an archiving frequency of up to 0.1 Hz
a short-term database that provides a few hours of retention, but with a higher archiving frequency (up to 10 Hz)
a snapshot database
I have a number of questions:
Firstly, by “up to”, does that mean that when setting up a polling mechanism for an attribute, I can vary the speed at which attribute change events are triggered?
Secondly, are the 0.1 Hz and 10 Hz hard limits? In other words, would I be able to monitor a particular set of devices (or attributes) more frequently? In some cases, I would like to have a historical (permanent) high frequency archive of some devices. Would I be able to do that with the provided MySQL database setup, or should I write a custom database interface to my own database setup for that?
Of course, an alternative would be to write some high-level client code that flushes the data from the short-term database to another database (my own custom historical database), which is much easier than writing custom database code at the device-server level. Any thoughts?
The official Tango Historical database is not event-based.
We are currently working on a new version of this system which is event-based.
Basically, we have special Tango device servers which subscribe to events coming from the attributes you are interested in. These event subscriber device servers store the data in the database of your choice when they receive new events. Currently we are testing MySQL and Cassandra, but the design allows you to use something else if you wish to; a bit of development will of course be needed to write the layer that connects to, reads from and writes to another database.
If you use an event-based Historical database, you will be limited by the number of events you can send, receive and handle.
Emmanuel Taurel kindly directed me to a paper from ICALEPCS 2013 where tests showed that the maximum event rate for a double scalar attribute is about 95 kHz. For spectrum or image attributes, it will be much less. This doesn’t mean the event subscriber side (and database side) will be able to cope with this high rate!
Your database will have to be able to handle a very high write load and the machine where your event subscriber is running will have to cope with all these events.
With the network bandwidth you were talking about in a previous post and with a database like Cassandra on SSD disks, for instance, you might be able to reach very impressive performance. But it might be very expensive to store so much data on SSDs! The advantage of SSDs is that you are no longer limited by the disk speed (but you might be by the size and price).
I think you should do some tests, because Cassandra, for instance, is already able to achieve impressive write performance on spinning disks (but you will be limited by the disk speed).
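Before sizing hardware, a back-of-the-envelope estimate of the write load helps. The sketch below is plain Python with purely illustrative numbers (the per-sample size and attribute count are assumptions, not measured figures); it computes the sustained write rate and daily volume for a set of scalar attributes archived at a given frequency:

```python
# Rough write-load estimate for an event-based archiver.
# All numbers are illustrative assumptions, not benchmarks.

def archive_load(n_attributes, rate_hz, bytes_per_sample):
    """Return (bytes_per_second, bytes_per_day) for the given setup."""
    per_second = n_attributes * rate_hz * bytes_per_sample
    return per_second, per_second * 86400

# Example: 1000 double scalar attributes at 10 Hz, assuming ~64 bytes
# per stored sample (value + timestamp + quality + row overhead).
per_s, per_day = archive_load(1000, 10, 64)
print(f"{per_s / 1e6:.2f} MB/s, {per_day / 1e9:.1f} GB/day")
```

Even at modest rates the daily volume adds up quickly, which is why the disk question matters as much as the raw event rate.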
Please be aware that there is the possibility to send events manually from your device servers too.
With the official version of the Tango historical database, I don’t know whether these numbers are hard limits…
Maybe someone from Soleil could answer this question.
These limits are probably there to prevent users from storing too much data in the database, in order to keep it manageable and not too big.
With the alternative version of the historical database we are working on, since it is event-based, you should in theory be able to store at a higher frequency, if you get the appropriate hardware (network, CPU, disks).
We haven’t done performance tests yet, but we have already been storing some attributes permanently at 1 Hz very easily for several months, with MySQL or Cassandra, on spinning disks.
[quote=“drea”]
Of course, an alternative would be to write some high-level client code that flushes the data from the short-term database to another database (my own custom historical database), which is much easier than writing custom database code at the device-server level. Any thoughts?
Thanks for your help.[/quote]
You could use the new event-based historical database (Tango HDB++) too.
But it is still a prototype for the moment and under development.
The advantage would be that you could benefit from all the configuration, diagnostics and extraction tools we are currently developing.
Hi Andrea and Reynald,
we have sometimes used HDB++ to store data at a higher frequency, up to 1 kHz with MySQL. The architecture allows it;
you then have to set up your back-end to cope with the data…
I thought that forum posts were redirected to the Tango mailing list. My boss only told me yesterday that there were a few questions for SOLEIL… So sorry for my late answer.
[quote=“drea”]I read that TANGO provides a number of databases:
a historical database with an archiving frequency of up to 0.1 Hz
a short-term database that provides a few hours of retention, but with a higher archiving frequency (up to 10 Hz)
a snapshot database
I have a number of questions:
Firstly, by “up to”, does that mean that when setting up a polling mechanism for an attribute, I can vary the speed at which attribute change events are triggered?
[/quote]
As Reynald said, the official Soleil archiving system is based on the polling system, so the frequency depends on the device behind it. In our system, you must not define a polling mechanism on your attribute yourself.
The archiving configuration is saved in the archiving database (in the AMD table, Archiving Mode Definition):
HDB = historical database
TDB = temporary database (short-term database)
So you have nothing to do in the Tango database through Jive; we provide the Mambo GUI to configure the archiving mode.
As you can see in the following screenshot: Your text to link here…
That means that at Soleil it is our users (operators or beamline staff) who are in charge of the archiving configuration, and consequently they are autonomous.
Yes, the limit depends on your device, so on the hardware in your case. You can try as fast as possible and see how your device answers the read requests. At Soleil the soft limit is based on experience…
But this soft limit can be set to 0.1 Hz in HDB. It is only checked in the Mambo GUI… so in the Archiver device you can go faster.
Concerning defining a set of attributes at different rates: at Soleil our system is based on archiving configurations. Such a configuration is saved in an XML file with the .ac extension. It defines the set of attributes that you want to archive, and the mode and the rate of the configuration. So our users have a lot of AC files defined for their equipment or for a domain (Vacuum, Insertion, PowerSupply…).
[quote=“drea”]
Of course, an alternative would be to write some high-level client code that flushes the data from the short-term database to another database (my own custom historical database), which is much easier than writing custom database code at the device-server level. Any thoughts?
Thanks for your help.[/quote]
At Soleil, our beamlines also have a fast historical mode. This is the equivalent of the historical mode, but at the short-term frequency. For that you just have to set the HdbArchiver device property isHighRate=true.
I hope my answer is useful for you. The project leader of the Archiving system is Raphael Girardot at SOLEIL.
So do not hesitate to contact him, or me, for more information.
How do I extract Tango attributes from the historical database in Mambo? What is the use of HdbExtractor?
I can’t find any documentation on HdbExtractor.
Any help will be appreciated
I want to archive Tango attributes in the historical database with a period of 1 second. Whenever I create/modify an AC, in the Attribute Properties section, I set period = 1 s, but when I click on Set, the following message shows up: “The specified archiving mode is invalid: archiving period too low !! ( 1000ms < 10000ms )”. I have already set the default value of the HDB period to 1 second in Tools → Options → AC. How do I achieve this?
I have another GUI running on a different system. I want to extract/retrieve Tango attributes from HDB. How do I achieve this?
What’s the use of HDBTDBArchivingWatcher?
How can I extract data from TDB and insert into HDB ?
Also, for how long does TDB store the archived data? Where can I configure it?
[quote=“bchitalia2”]How do I extract Tango attributes from the historical database in Mambo? What is the use of HdbExtractor?
I can’t find any documentation on HdbExtractor.
Any help will be appreciated.
Thanks
Balkrishna[/quote]
In Mambo, if you have an AC already created, you can click on the Transfer to VC button at the bottom of the application. Otherwise, you should create a VC (Visualization Configuration) to extract your data with Mambo.
To do that, follow the instructions included in the doc folder: Mambo_Manual_V2_1.
There are other ways to extract your data:
Via the HdbExtractor or TdbExtractor. Both devices have the same interface, and you don’t need a VC file.
Use the command ExtractBetweenDates (click on the Description button in Jive).
Argin description:
[attributeName,start date, end date]. dates format is DD-MM-YYYY HH24:MI:SS
Or use the command GetAttDataLastN (click on the Description button in Jive).
Argin description:
The attribute’s name and the number of desired data points.
Via the [url=http://wiki.synchrotron-soleil.fr/tikiwiki/tiki-index.php?page=How-browse-dowloaded-Nexus-files]DataBrowser application[/url], you should configure the database information in the script:
define the environment variables -DHDB_USER=hdb -DHDB_PASSWORD=hdb, modify CDMADictionaries\mappings\MamboSoleil\cdma_mambosoleil_config.xml, and read the DataBrowserUserManual.pdf included in the doc folder.
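If you script the extraction instead of typing the arguments in Jive, it is easy to get the argin format wrong. The sketch below is plain Python; the attribute name is a made-up example and the date format follows the DD-MM-YYYY HH24:MI:SS description given above for the two extractor commands:

```python
from datetime import datetime

# Illustrative helpers building the string-array arguments (argin)
# for the two extractor commands described above.

def extract_between_dates_argin(attribute, start, end):
    """[attributeName, start date, end date], dates as DD-MM-YYYY HH24:MI:SS."""
    fmt = "%d-%m-%Y %H:%M:%S"
    return [attribute, start.strftime(fmt), end.strftime(fmt)]

def get_att_data_last_n_argin(attribute, n):
    """The attribute's name and the number of desired data points."""
    return [attribute, str(n)]

# Hypothetical attribute name, for illustration only.
argin = extract_between_dates_argin(
    "p/q/r/Speed",
    datetime(2015, 5, 26, 8, 8, 8),
    datetime(2015, 5, 27, 15, 30, 30),
)
print(argin)  # ['p/q/r/Speed', '26-05-2015 08:08:08', '27-05-2015 15:30:30']
```

With PyTango, such a list would then be passed as the argin of the extractor device’s command; that part depends on your setup and is not shown here.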
[quote=“bchitalia2”]I want to archive Tango attributes in the historical database with a period of 1 second. Whenever I create/modify an AC, in the Attribute Properties section, I set period = 1 s, but when I click on Set, the following message shows up: “The specified archiving mode is invalid: archiving period too low !! ( 1000ms < 10000ms )”. I have already set the default value of the HDB period to 1 second in Tools → Options → AC. How do I achieve this?
[/quote]
I think that it is a software safeguard. In HDB you can’t go lower than 10 s. But I’ll let my colleague answer you for this part. Try to launch archiving with TDB (leave HDB unchecked at the start of your AC creation).
See my answer before.
This device is used at SOLEIL to check that the archiving system is working, for example if the database is down. The device checks every 3 hours whether the database is available, and also whether some attributes are no longer being archived.
At Soleil, TDB data can be kept and never removed (long-term TDB), because data removal is done manually by our database administrator. So in reality your data will always be there.
The schemas of the two databases, HDB and TDB, are exactly the same, so you can probably export data from TDB and import it into HDB as is.
But the solution for you, as I told you, is to use HDB at a high rate; see my answer to you from last week.
Soleil should take this mode into account, in order to remove the software safeguard in the GUI that prevents going lower than 10 s. You should ask Raphaël Girardot for this evolution.
[quote=“bchitalia2”]
5. Also, for how long does TDB store the archived data? Where can I configure it?
Thanks
Balkrishna[/quote]
See my answer from last week, and check the user manual in the doc folder.
I hope this helps you.
Could you please confirm with Raphael and guide me on how to archive Tango attributes in HDB at a rate of 1 second?
I have read the Mambo manual and I am unable to find where it is written for how long TDB stores the archived data. So please could you confirm and tell me?
A VC is used for monitoring the values of attributes after starting archiving. For the Tango attributes I have archived in HDB, I can see the archived attribute list and their values get updated every 10 seconds (graphs etc.). But for TDB, after starting archiving, I can see the archived attribute list in the VC but the values don’t get updated after clicking on Refresh. So please could you tell me the reason?
As you said, there are other ways to extract the data:
a.) Click on the ExtractBetweenDates command in Jive and write the argument.
But whenever I write it in the format mentioned in your reply above, it says either attribute not found or the argument is invalid. Maybe I am making a mistake somewhere in writing the argument, so could you please guide me? Suppose the attribute is Speed, the start date is 26-05-2015 08:08:08 and the end date is 27-05-2015 15:30:30; how would you write the argument?
b.) I am facing the same issue with GetAttDataLastN.
DataBrowserUserManual.pdf is not in the doc folder of the Archiving root. So could you please provide me the link to download it?
On a different machine I have to extract the Tango attributes from HDB. How can I achieve this? You have given a reply to this question above, but could you please elaborate, as I am unable to do it?
Also, in the Bensikin tool, is there any way that snapshots can be uploaded to the snapshot panel at a regular interval, rather than clicking on the Launch Snapshot button again and again? I need a snapshot to be uploaded to the snapshot panel at a rate of 1 second. How can I achieve that here?
Thanks and Best Regards
[quote=“bchitalia2”]Could you please confirm with Raphael and guide me on how to archive Tango attributes in HDB at a rate of 1 second?
I have read the Mambo manual and I am unable to find where it is written for how long TDB stores the archived data. So please could you confirm and tell me?
[/quote]
HDB can’t be lower than 10 s.
TDB can’t be lower than 100 ms.
So if you want to test HDB, first try without a fast rate: test with 10 s and then try to extract through a VC.
If you want a rate of 1 s, you can try with TDB, the temporary database. (The inconvenience is that the data will not be kept as long as in the historical database.)
When you create a new AC, check the Temporary AC checkbox. In this mode you will be able to set a 1 s period. You have to select your attribute in the tree and click on the Set button. The attribute will then be shown in bold font in the tree.
Please check the console at the bottom of Mambo:
Does the connection to the TDB database work?
Is the attribute OK? (If it is not, you should see some NULLs in the database.)
Normally, for TDB, the attributes are first stored in a file defined in the TdbArchiver DbPath property.
Then the data are exported to the database every ExportPeriod (a TdbArchiver property, in ms, which can be defined at the class level).
Perhaps this means that your data has not been exported to the TDB database yet, so check your data file.
Is the bulb brown? (That means the attribute is archived in TDB.)
First check whether your attribute is correctly archived by executing the GetCurrentArchivedAtt command on the extractor device.
Then here are working arguments: GetAttDataLastN “ANS-C09/VI/CALC-JPEN.1/mean”,“20”. It will create a dynamic attribute that you can read afterwards. The response of the command is the name of the new attribute and also the length.
ExtractBetweenDates “ANS-C09/VI/CALC-JPEN.1/mean”,“2015-05-27 18:08:00.184”,“2015-05-28 18:08:00.184”
You are missing the milliseconds in your sample… The advantage is that the answer contains the values directly.
When you say a different machine, do you mean a different Tango database? If so, I misunderstood your question. Could you explain the architecture to me: several Tango databases? One archiving database? …
[quote=“bchitalia2”]
7. Also, in the Bensikin tool, is there any way that snapshots can be uploaded to the snapshot panel at a regular interval, rather than clicking on the Launch Snapshot button again and again? I need a snapshot to be uploaded to the snapshot panel at a rate of 1 second. How can I achieve that here?
Thanks and Best Regards
Balkrishna[/quote]
In Bensikin, there is no snapshot automation. So for today, you can use the SnapShotManager device and its LaunchSnapShot command, with the context id as argument.
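To automate it, a small client can simply call that command in a loop. The sketch below is plain Python with the actual device call left as a placeholder; the SnapShotManager device name and the PyTango call in the comment are assumptions about your setup, not tested code:

```python
import time

# Minimal periodic runner: calls `action` every `period_s` seconds,
# `count` times. In practice the action would be the LaunchSnapShot call.
def run_periodically(action, period_s, count):
    results = []
    for _ in range(count):
        results.append(action())
        time.sleep(period_s)
    return results

# Illustrative stand-in for the real device call, e.g. (hypothetical):
#   proxy = tango.DeviceProxy("archiving/snap/snapshotmanager.1")
#   action = lambda: proxy.command_inout("LaunchSnapShot", context_id)
snapshots = run_periodically(lambda: "snapshot launched", 0.01, 3)
print(snapshots)
```

Note that launching a full snapshot every second may put a significant load on the snapshot database; the 1 s figure is the rate asked for above, not a recommendation.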
We are working on the AlarmTool (included in the Archiving package) to launch a snapshot automatically when an alarm event occurs.
If you want to submit some archiving evolutions, do not hesitate to do it in the Tango mediatracker, or ask Raphaël Girardot directly.
Is there any workaround to get a fast rate of 1 s in HDB?
I read in some presentation that it stores data for 30 days; can we customize this?
Regarding my issue with the view configuration part in TDB, the console at the bottom of Mambo reads:
29-05-15 10:34:34.219 - DEBUG: Archiving successfully started
29-05-15 10:34:44.992 - INFO : extract from DB for p/q/r/WindSpeed took 49 ms
29-05-15 10:34:45.006 - INFO : extract from DB for p/q/r/Speed took 11 ms
29-05-15 10:34:45.016 - INFO : extract from DB for p/q/r/State took 9 ms
29-05-15 10:34:45.033 - INFO : extract from DB for p/q/r/Status took 16 ms
29-05-15 10:34:50.070 - INFO : extract from DB for p/q/r/WindSpeed took 25 ms
So no error here.
Also, the TDB connection is OK, the attribute is fine and the data is archived in TDB, as indicated by the brown bulb.
But when I do Transfer to VC, the attribute list comes but no data is updated (no graphs); when I do the same thing for HDB, everything is perfectly fine.
Yes, I am using different Tango databases and one archiving database. I tried using comma-separated host entries in the TANGO_HOST variable. Only the first Tango database host in the TANGO_HOST list and its devices get listed in the Mambo AC; I am unable to get the other hosts and their devices.
This is the error coming up in the DataBrowser GUI:
Jun 01, 2015 3:43:47 PM fr.soleil.data.service.LoggingSystemDelegate log
SEVERE: Failed to transmit p/q/r/speed history data to targets
java.lang.ClassCastException: [[D cannot be cast to fr.soleil.data.container.matrix.AbstractNumberMatrix
at fr.soleil.data.controller.NumberMatrixController$WildcardNumberAdapter.adaptSourceData(NumberMatrixController.java:78)
at fr.soleil.data.adapter.source.DataSourceAdapter.getData(DataSourceAdapter.java:47)
at fr.soleil.data.controller.DataTargetController.transmitDataToTarget(DataTargetController.java:115)
at fr.soleil.data.mediator.AbstractController.transmitSourceChange(AbstractController.java:542)
at fr.soleil.data.source.AbstractDataSource.notifyMediators(AbstractDataSource.java:106)
at fr.soleil.data.source.BufferedDataSource.updateData(BufferedDataSource.java:79)
at fr.soleil.data.source.BufferedDataSource.updateData(BufferedDataSource.java:70)
at fr.soleil.data.service.thread.DataSourceRefreshingThread.refreshData(DataSourceRefreshingThread.java:47)
at fr.soleil.data.service.thread.AbstractRefreshingThread.run(AbstractRefreshingThread.java:55)
[[D cannot be cast to fr.soleil.data.container.matrix.AbstractNumberMatrix
Jun 02, 2015 12:20:33 PM fr.soleil.data.service.LoggingSystemDelegate log
SEVERE: Failed to transmit sys/tg_test/1/wave history data to targets
fr.soleil.data.exception.DataAdaptationException: fr.soleil.comete.definition.data.adapter.ObjectToStringMapAdapter can’t adapt data of class java.util.ArrayList
at fr.soleil.comete.definition.data.adapter.ObjectToStringMapAdapter.generateDefaultException(ObjectToStringMapAdapter.java:386)
at fr.soleil.comete.definition.data.adapter.ObjectToStringMapAdapter.extractArraysFromUnusualData(ObjectToStringMapAdapter.java:369)
at fr.soleil.comete.definition.data.adapter.ObjectToStringMapAdapter.extractArrays(ObjectToStringMapAdapter.java:225)
at fr.soleil.comete.definition.data.adapter.ObjectToStringMapAdapter.adapt(ObjectToStringMapAdapter.java:95)
at fr.soleil.comete.definition.data.adapter.ObjectToStringMapAdapter.adapt(ObjectToStringMapAdapter.java:28)
at fr.soleil.data.adapter.source.DataSourceAdapter.adaptSourceData(DataSourceAdapter.java:69)
at fr.soleil.comete.definition.data.adapter.DataArrayAdapter.adaptSourceData(DataArrayAdapter.java:60)
at fr.soleil.comete.definition.data.adapter.DataArrayAdapter.adaptSourceData(DataArrayAdapter.java:24)
at fr.soleil.data.adapter.source.DataSourceAdapter.getData(DataSourceAdapter.java:47)
at fr.soleil.data.controller.DataTargetController.transmitDataToTarget(DataTargetController.java:115)
at fr.soleil.data.mediator.AbstractController.transmitSourceChange(AbstractController.java:542)
at fr.soleil.data.source.AbstractDataSource.notifyMediators(AbstractDataSource.java:106)
at fr.soleil.data.source.MultiDataSource$FakeMediator.transmitSourceChange(MultiDataSource.java:222)
at fr.soleil.data.source.AbstractDataSource.notifyMediators(AbstractDataSource.java:106)
at fr.soleil.data.source.BufferedDataSource.updateData(BufferedDataSource.java:79)
at fr.soleil.data.source.BufferedDataSource.updateData(BufferedDataSource.java:70)
at fr.soleil.data.service.thread.DataSourceRefreshingThread.refreshData(DataSourceRefreshingThread.java:47)
at fr.soleil.data.service.thread.AbstractRefreshingThread.run(AbstractRefreshingThread.java:55)
fr.soleil.comete.definition.data.adapter.ObjectToStringMapAdapter can’t adapt data of class java.util.ArrayList
As a matter of fact, there is a workaround. But first, you have to understand that HDB is expected to store data forever, which is why it was decided to put this kind of limit in place. DB administrators won’t like to have their DB storage growing too fast, and users have to ask themselves whether it is really useful to store some data forever at a higher frequency than every 10 s. That being said, the workaround is to use the property “shortPeriodAttributes” in the HdbArchiver class. This property must be written this way:
“Attribute complete name,minimum period in seconds”.
For example:
"
tango/tangotest/1/short_scalar_ro,2
tango/tangotest/1/double_scalar_ro,1
"
So, for these attributes, you define the maximum archiving frequency, which can’t be higher than 1 Hz (every second). (And no, there is no further workaround for this limit.)
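As a sanity check before writing the property, each entry can be validated against the format and the 1 s floor described above. This is a plain-Python sketch (the attribute names are just the examples from the post), not part of the HdbArchiver itself:

```python
# Parse and validate "shortPeriodAttributes" entries of the form
# "attribute complete name,minimum period in seconds".
def parse_short_period(lines):
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        name, period = line.rsplit(",", 1)
        period = int(period)
        if period < 1:  # 1 s (1 Hz) is the hard floor for this workaround
            raise ValueError(f"period too low for {name}: {period}s")
        entries.append((name, period))
    return entries

prop = """
tango/tangotest/1/short_scalar_ro,2
tango/tangotest/1/double_scalar_ro,1
"""
print(parse_short_period(prop.splitlines()))
```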
Yes, you can, and in fact you MUST. Let me explain:
TDB cleaning is done by TdbCleaner, which for now is only available for Linux.
This cleaner should be registered in crontab so that it is regularly executed (this is the part for which I wrote “you MUST”).
To know which data is considered too old and must be cleaned, TdbCleaner reads the property “RetentionPeriod” in the TdbArchiver class. This property must be filled this way: “time unit/value”, where the time unit can be “minutes”, “hours” or “days”, and the value a strictly positive integer. If this property is not filled, the default value is used (3 days, which is represented as “days/3”). So, data older than RetentionPeriod will be deleted by TdbCleaner every time it is executed.
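For reference, the “time unit/value” format described above can be turned into a cutoff date like this (a plain-Python sketch of the rule, not TdbCleaner’s actual code):

```python
from datetime import datetime, timedelta

# Interpret a RetentionPeriod string such as "days/3" and compute the
# cutoff: anything older than the returned datetime would be cleaned.
def retention_cutoff(retention, now=None):
    unit, value = retention.split("/")
    value = int(value)
    if value <= 0:
        raise ValueError("value must be a strictly positive integer")
    if unit not in ("minutes", "hours", "days"):
        raise ValueError(f"unknown time unit: {unit}")
    now = now or datetime.now()
    return now - timedelta(**{unit: value})

now = datetime(2015, 6, 4, 12, 0, 0)
print(retention_cutoff("days/3", now))   # 2015-06-01 12:00:00
print(retention_cutoff("hours/12", now)) # 2015-06-04 00:00:00
```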
[quote]Also, the TDB connection is OK, the attribute is fine and the data is archived in TDB, as indicated by the brown bulb.
But when I do Transfer to VC, the attribute list comes but no data is updated (no graphs); when I do the same thing for HDB, everything is perfectly fine.[/quote]
This may be because your data was not exported to the database yet. TDB does not work exactly the same way as HDB. As TDB accepts a higher archiving rate than HDB, data are first written to files by the archiver (the deletion of these files must be done by the administrator, using for example a script registered in crontab), before being exported to the database. To see your data as soon as possible in Mambo, you have to check an option: in Mambo, go to Tools/Options, then go to the “VCs” tab and select “yes” for “Force TDB export on View”. Viewing your data will take more time, because before data extraction Mambo will ask the TdbArchivers to export their data to TDB instead of letting them do this automatically at their own rate.
For HDB, data are directly written to the database.
Mambo was not designed to work with multiple Tango hosts at the same time. The comma separator is more likely interpreted as: if the first host doesn’t answer, try the second one.
Well, this is a bug. We will check it and try to find a correction.
I selected “Yes” for “Force TDB export on view”, but I am still not able to view the data. Any other workaround?
This is a very basic question, as I don’t know anything about crontab. How do I register TdbCleaner in crontab?
OK, got it. Thanks for the reply.
Here are a few other queries:
The following error comes up while using the AlarmTool GUI:
Rule creation error
Cannot set rule to database
Unknown column ‘custom’ in ‘field list’.
Rule creation error
Cannot set rule to database
Unknown column ‘textTalkerEnabled’ in ‘field list’.
3. When I run ./AlarmManager 1, the following reads at the bottom of the console:
INFO Thread-12-f.s.a.a.a.i.s.i.LocalManager.registerNewArchivers:105 - Register Archiver archiving/alarmdb/alarmdbarchiver.01_01
Do I have to register the alarm archiver somewhere? If yes, where?
4. In the DataBrowser application, is it possible to see the values of previously archived data?
I will answer you for the archiving part, and let Katy answer for the rest.
This probably means your TDB archivers do not even write the files. So first, check where they write their files: take a look at their “DbPath” and “DsPath” properties. Maybe the folder path is not right.
Before I answer about DataBrowser and AlarmTool, and let Raphaël answer about the archiving tools (Mambo), let me remind you of the global architecture, because the questions all come at the same level, but Alarm, AlarmDb, Archiving DB and DataBrowser are not all in the same project. By the way, it would be easier to answer if you posted your questions about DataBrowser and AlarmTool in separate topics…
So, first, I have already answered Drea at the beginning of the topic about HDB, TDB and SNAP. Those 3 projects are included in the Archiving package that you have downloaded. Raphaël is in charge of this package.
The ADB that came with MACARENA has been given up and replaced by the AlarmTool project, which is still managed by me for the moment; I will hand the project over to Raphaël very soon. But we are very busy, so I have not had the time yet to transfer the project.
Then AlarmTool, as I said before, is still my responsibility. And as I said to Sudeep in the topic about it, you can download the project from this link. The future Archiving release will include my fixes coming from Sudeep’s comments. So please try this package and read the doc folder to learn how to install AlarmTool.
Finally, the DataBrowser project is a completely different tool that allows you to browse any data (Tango, NeXus, HDF5, Archiving). So yes, you could use it to read archived data from HDB or TDB.
For that, as I told you in the same topic here, you have to configure the database connection through the DataBrowser script delivered in the package.
I have a question in return: you got an exception when trying to open a Tango attribute (wave… or something else).
Could you explain to me what you are trying to open, so that I can reproduce the error on my computer?
I will try to fix your problem. It would help me if you sent me a screenshot. And could you post your problem in a separate topic about DataBrowser?
[quote]I have a question in return: you got an exception when trying to open a Tango attribute (wave… or something else).
Could you explain to me what you are trying to open, so that I can reproduce the error on my computer?
I will try to fix your problem. It would help me if you sent me a screenshot. And could you post your problem in a separate topic about DataBrowser?
[/quote]
I was not able to set the rule in the alarm database initially. Now I am able to do so with the latest version you provided.
Also, I will create different threads for the discussion of DataBrowser and the AlarmTool if I come across any doubts.
Thanks and Best Regards
Balkrishna
I tried this method and it’s running perfectly fine. Just one more request: I have around 100 attributes to be archived at 1 s in HDB.
So do I have to type all the attributes in the HdbArchiver class property? I mean like:
For ex:
1.“tango/tangotest/1/short_scalar_ro,1
2. tango/tangotest/1/double_scalar_ro,1
…
…
…
100.tango/tangotest/1/boolean,1”
Isn’t there any shortcut to do so? Just asking out of curiosity, whether it can be done.
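One way to avoid typing 100 entries by hand is to generate the property value with a small script and then set it on the HdbArchiver class (for example with PyTango’s property-writing facilities; that step depends on your installation and is not shown). A plain-Python sketch, where the attribute names are made-up examples:

```python
# Generate the multi-line value for the "shortPeriodAttributes"
# class property from a list of attribute names, all at a 1 s period.
def short_period_property(attributes, period_s=1):
    return [f"{attr},{period_s}" for attr in attributes]

# Hypothetical attribute list; in practice you might read it from a
# file or query the Tango database for a device's attributes.
attrs = [f"tango/tangotest/{i}/double_scalar_ro" for i in range(1, 4)]
for line in short_period_property(attrs):
    print(line)
```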