Tango database

I have MySQL 5.5.43.

You can make a test as follows:

mysql> select device,attribute,name,count,value  from property_attribute_device where device='p09/motor/exp.31' and attribute='StepPositionController' ;
+------------------+------------------------+---------+-------+---------+
| device           | attribute              | name    | count | value   |
+------------------+------------------------+---------+-------+---------+
| p09/motor/exp.31 | StepPositionController | __value |     1 | 2132132 |
+------------------+------------------------+---------+-------+---------+
1 row in set (0.01 sec)

mysql> update property_attribute_device SET value='2132132' where device='p09/motor/exp.31' and attribute='StepPositionController';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> update property_attribute_device SET value='2132133' where device='p09/motor/exp.31' and attribute='StepPositionController';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> update property_attribute_device SET value='2132133' where device='p09/motor/exp.31' and attribute='StepPositionController';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

And the “Changed” value from

Rows matched: *  Changed: *  Warnings: *

is returned by mysql_affected_rows(…)

Cheers,
Jan

Jan,

Well, by adding the CLIENT_FOUND_ROWS flag to the last parameter of mysql_real_connect(), the patch works fine.
This mysql_real_connect() call is used twice, in the method DataBase::create_connection_pool() in the DataBaseUtils.cpp file.
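Manu's change can be sketched as follows (illustrative only; the real calls live in DataBase::create_connection_pool() in DataBaseUtils.cpp, with the server's actual host and credentials, and this fragment needs libmysqlclient and a running server to be used):

```cpp
// With CLIENT_FOUND_ROWS set, mysql_affected_rows() after an UPDATE
// reports "Rows matched" instead of "Changed", so an UPDATE that writes
// the same memorized value again is no longer mistaken for a failure.
#include <mysql/mysql.h>

MYSQL *connect_with_found_rows(const char *host, const char *user,
                               const char *pass, const char *db)
{
    MYSQL *conn = mysql_init(nullptr);
    if (conn == nullptr)
        return nullptr;

    // Last argument is the client_flag bitmask
    if (mysql_real_connect(conn, host, user, pass, db,
                           0, nullptr, CLIENT_FOUND_ROWS) == nullptr)
    {
        mysql_close(conn);
        return nullptr;
    }
    return conn;
}
```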

Cheers

Manu

Manu,

OK. That should help. I will change it.

Cheers,

Jan

Hi,

During the weekend I performed tests comparing the original version of the Database server from Tango 8.1.2 to the patched one (without history of memorized attribute values).
In both cases I use the MyISAM engine.
On the first plot (standard setup) one can see the two highest peaks: for the TSM backup and serverdump.sh.
On the second plot the average execution time and the peaks are much smaller.
The highest peak corresponds to the TSM backup.

Cheers,
Jan

Very nice result! Can we say the new modification solves the problem?

Andy

Hi Jan,

Could you tell us what you had to do to clean up your database after the multiple runs of the installation script?
Do you think we could provide a script to do this clean-up in a simple manner?

Thanks in advance

Manu

Hi Manu,

It would be a good idea to provide such a script, but it is not such a trivial task.

For me the best solution is to clean up all the history tables and start everything from scratch.
Additionally, one has to remove duplicated rows in other tables such as *history_id, or device, server, access_address, access_device, property_class.

The problem is that when the new history_id counter is added, it starts from 0, which messes up all the id numbers.
A history table can then contain older rows with higher ids and younger rows with lower ids.
One could probably repair this by renumbering the ids according to the timestamps in the history tables.

Fortunately, at DESY we do not use the history tables very much, so Hannes could write some PHP scripts which fit our needs.

In my DB I did it manually. I don't have any general script.

Best,
Jan

Hello everybody.

I don’t know if it is off topic, but since you are talking about engines, I’d like to inform you that at Virgo we have moved the Tango database to the ndbcluster engine, in a clustered database.
There are only a few bugs/incompatibilities between it and InnoDB/MyISAM in the DataBase server, related to NULL values passed to NOT NULL columns, and they can be easily fixed. While MyISAM and InnoDB use the column’s default value when a NULL is passed to a NOT NULL column, ndbcluster does not behave the same way and the query fails.

As a workaround, we’ve replaced all the occurrences of

"',updated=NULL,accessed=NULL"

with

"',updated=NOW(),accessed=NOW()"

inside the DataBase.cpp file. Do you know if there are other incompatibilities with the ndbcluster engine?

Regards,

Giovanni

Hi Giovanni,

this sounds very interesting and it would be useful to share the recipe on how to do this. I recommend you use the latest version of the Database device server, which has been modified as explained earlier in this thread. It will be part of the Tango 9 distribution but does not depend on Tango 9, so you can download it separately and install it in a Tango 8 system.

Andy

Andy,

The modified Db server in the Tango 9 distribution requires a modified database schema due to the new tables related to Pipes.
Therefore, I would not recommend using it on a Tango 8 Db schema. It’s true that you can use it with a Tango library which is not 9,
but you need the Tango 9 db schema.

Cheers

Manu

Hi Manu and Emmanuel,

here is another patch for your new MyISAM db version:


diff --git a/cppserver/database/DataBase.cpp b/cppserver/database/DataBase.cpp
index 684d622..0dad4e3 100644
--- a/cppserver/database/DataBase.cpp
+++ b/cppserver/database/DataBase.cpp
@@ -6516,7 +6516,7 @@ void DataBase::create_update_mem_att(const Tango::DevVarStringArray *argin)
 //
 
     stringstream sql_query_stream;
-    sql_query_stream << "UPDATE property_attribute_device SET value=\"" << (*argin)[6]
+    sql_query_stream << "UPDATE property_attribute_device SET value=\"" << escape_string((*argin)[6])
                      << "\" WHERE device=\"" << tmp_device << "\" AND attribute=\"" << tmp_attribute
                      << "\" AND name=\"__value\" AND count=1";
     DEBUG_STREAM << "DataBase::PutAttributeProperty2(): sql_query " << sql_query_stream.str() << endl;

Without it I get the following error:


Failed to query TANGO database (error=You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'controllers": {"haso228k:10000/controller/dgg2ctrl/dgg2_exp_02": {"units": {"0":' at line 1)
.The query was: UPDATE property_attribute_device SET value="{"controllers": {"haso228k:10000/controller/dgg2ctrl/dgg2_exp_02": {"units": {"0": {"channels": 
...

Cheers,
Jan

or, alternatively, storing the escaped value in a variable so it can be reused:


@@ -6516,7 +6516,8 @@ void DataBase::create_update_mem_att(const Tango::DevVarStringArray *argin)
 //
 
     stringstream sql_query_stream;
-    sql_query_stream << "UPDATE property_attribute_device SET value=\"" << (*argin)[6]
+    string tmp_escaped_string = escape_string((*argin)[6]);
+    sql_query_stream << "UPDATE property_attribute_device SET value=\"" << tmp_escaped_string
                      << "\" WHERE device=\"" << tmp_device << "\" AND attribute=\"" << tmp_attribute
                      << "\" AND name=\"__value\" AND count=1";
     DEBUG_STREAM << "DataBase::PutAttributeProperty2(): sql_query " << sql_query_stream.str() << endl;

A similar modification should also be applied to the INSERT query, i.e.:

diff --git a/cppserver/database/DataBase.cpp b/cppserver/database/DataBase.cpp
index 8fea02e..461e866 100644
--- a/cppserver/database/DataBase.cpp
+++ b/cppserver/database/DataBase.cpp
@@ -6553,7 +6553,7 @@ void DataBase::create_update_mem_att(const Tango::DevVarStringArray *argin)
         sql_query_stream.str("");
         sql_query_stream << "INSERT INTO property_attribute_device SET device=\'"
                          << tmp_device << "\',attribute=\'" << tmp_attribute
-                         << "\',name=\'__value\',count=1,value=\'" << (*argin)[6] << "\',updated=NULL,accessed=NULL";
+                         << "\',name=\'__value\',count=1,value=\'" << tmp_escaped_string << "\',updated=NULL,accessed=NULL";
         DEBUG_STREAM << "DataBase::PutAttributeProperty(): sql_query " << sql_query_stream.str() << endl;
 
         sql_query = sql_query_stream.str();

Hi,

Just to give an update: I’ve noticed that my database is blocked by any extensive disk usage from other
processes (e.g. TSM in my case). I found the coincidence by making use of

iostat -m -x 1 -t

If such a process has to use the same disk as the Tango Database, one can consider


renice 10  -p `pidof dsmcad`
ionice -c3 -p `pidof dsmcad`

where dsmcad is an example of the blocking process.

Cheers,
Jan

Hi Jan,

Thanks for the patches you sent some weeks ago. They are now included in the trunk.

Cheers

Emmanuel

Good day to everyone,

I’m not sure whether I’m asking my question in the right topic, but it is related to the TANGO Database. At ELI-ALPS, we’d like to create an application which is able to import to and export from the TANGO database using the available PyTango API. We’ve seen (and I have also tried) Cosylab’s very impressive and thorough work on importing records from CSV files into the database, which gives us a big hint on how to proceed and what to prepare for such a program.

However, before we go too far in the development, we’d like to investigate a little whether somebody in the TANGO community has already created such a tool, or is simply interested in the project. The goal of the application would be to import and export devices in the database, along with configuring the Starter devices if needed, so that the final result would be a slice of a portable, easily configured TANGO system.

I wish you all the best,
Peter