Well, after adding the CLIENT_FOUND_ROWS flag to the last parameter of mysql_real_connect(), the patch works fine.
mysql_real_connect() is called twice in the DataBase::create_connection_pool() method in the DataBaseUtils.cpp file.
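For reference, the flag goes into the client_flag (last) argument of mysql_real_connect(). A minimal sketch of the change, not the actual server code; the connection parameters (host, user, password, database name, port) are placeholders:

```cpp
#include <mysql/mysql.h>

// Sketch of the call as it might appear in DataBase::create_connection_pool():
// passing CLIENT_FOUND_ROWS makes mysql_affected_rows() report the number of
// rows *matched* by an UPDATE rather than the number actually changed, which
// is presumably what the patch relies on. All connection parameters below are
// placeholders, not the server's real configuration.
MYSQL *conn = mysql_init(NULL);
if (!mysql_real_connect(conn,
                        "localhost",        // host (placeholder)
                        "tango", "secret",  // user, password (placeholders)
                        "tango",            // database name (placeholder)
                        3306,               // port
                        NULL,               // unix socket
                        CLIENT_FOUND_ROWS)) // client_flag: the added flag
{
    // handle error via mysql_error(conn)
}
```

Both call sites in create_connection_pool() would need the same last argument.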
During the weekend I performed tests comparing the original version of the Database server from Tango 8.1.2 with the patched one (without the history of memorized attribute values). In both cases I used the MyISAM engine.
On the first plot (standard setup) one can see the two highest peaks: for the TSM backup and for serverdump.sh.
On the second plot the average execution time and the peaks are much smaller; the highest peak corresponds to the TSM backup.
Could you tell us what you had to do to clean up your database after the multiple runs of the installation script?
Do you think we could provide a script to do this clean-up in a simple manner?
It would be a good idea to provide such a script, but it is not such a trivial task.
For me the best solution is to clean up all the history tables and start everything from scratch.
Additionally, one has to remove duplicated rows from other tables like *history_id or device, server, access_address, access_device, property_class.
The problem is that when the new history_id counter is added it starts from 0, which messes up all the id numbers.
A history table can then contain older rows with higher ids and younger rows with lower ids.
One could probably repair this by renumbering the ids according to the timestamps in the history tables.
Fortunately, at DESY we do not use too many history tables, so Hannes could write some PHP scripts which fit our needs.
In my DB I did it manually; I haven't got any general script.
I don't know if it is off topic, but since you are talking about storage engines, I'd like to inform you that at Virgo we have moved the Tango database to the ndbcluster engine, in a clustered database.
There are only a few bugs/incompatibilities between it and InnoDB/MyISAM in the Database device server, related to NULL values passed to NOT NULL columns, and they can easily be fixed. While MyISAM and InnoDB fall back to the column's default value when a NULL is passed to a NOT NULL column, ndbcluster does not behave the same way and the query fails.
As a workaround, we’ve replaced all the occurrences of
"',updated=NULL,accessed=NULL"
with
"',updated=NOW(),accessed=NOW()"
inside the DataBase.cpp file. Do you know if there are other incompatibilities with the ndbcluster engine?
This sounds very interesting and it would be useful to share the recipe for how to do this. I recommend you use the latest version of the Database device server, which has been modified as explained earlier in this thread. It will be part of the Tango 9 distribution but does not depend on Tango 9, so you can download it separately and install it in a Tango 8 system.
The modified Db server in the Tango 9 distribution requires a modified database schema due to the new tables related to Pipes.
Therefore, I would not recommend using it on a Tango 8 Db schema. It's true that you can use it with a Tango library which is not version 9,
but you need the Tango 9 db schema.
Here is another patch for your new MyISAM db version:
diff --git a/cppserver/database/DataBase.cpp b/cppserver/database/DataBase.cpp
index 684d622..0dad4e3 100644
--- a/cppserver/database/DataBase.cpp
+++ b/cppserver/database/DataBase.cpp
@@ -6516,7 +6516,7 @@ void DataBase::create_update_mem_att(const Tango::DevVarStringArray *argin)
//
stringstream sql_query_stream;
- sql_query_stream << "UPDATE property_attribute_device SET value=\"" << (*argin)[6]
+ sql_query_stream << "UPDATE property_attribute_device SET value=\"" << escape_string((*argin)[6])
<< "\" WHERE device=\"" << tmp_device << "\" AND attribute=\"" << tmp_attribute
<< "\" AND name=\"__value\" AND count=1";
DEBUG_STREAM << "DataBase::PutAttributeProperty2(): sql_query " << sql_query_stream.str() << endl;
Without it I get the following error:
Failed to query TANGO database (error=You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'controllers": {"haso228k:10000/controller/dgg2ctrl/dgg2_exp_02": {"units": {"0":' at line 1)
The query was: UPDATE property_attribute_device SET value="{"controllers": {"haso228k:10000/controller/dgg2ctrl/dgg2_exp_02": {"units": {"0": {"channels":
...
Just to give an update: I've noticed that my database is blocked by any extensive disk usage by other
processes (e.g. TSM in my case). I found the correlation by making use of
iostat -m -x 1 -t
If such a process has to use the same disk as Tango Database one can consider
I'm not sure whether I'm asking my question in the right topic, but it is related to the TANGO Database. At ELI-ALPS, we'd like to create an application which is able to import to and export from the TANGO database using the available PyTango API. We've seen (and I have also tried) Cosylab's very impressive and thorough work on importing records from csv files into the database, which gives us a big hint on how to proceed and what to prepare for in such a program. However, before we go too far forward in the development, we'd like to investigate a little whether somebody in the TANGO community has already created such a tool, or is simply interested in the project. The goal of the application would be to import and export devices in the database, along with configuring the Starter devices if needed, so the final result would be a slice of a portable, easily configured TANGO system.