I haven't used trigger files before, but I wondered what people think of the method below as an alternative to using data queues.
I have about 20 files that I need to synchronize with their equivalents in a MySQL database. I already have a PHP script that reads all the data and creates each MySQL table in full, but we really need something that will keep the tables in sync and just process the changes.
Obviously the trigger programs need to be fast, since the I/O operation is not completed until the trigger program is complete.
I was thinking of using an RPG trigger program (ideally a single one that would work for every table), which would read the trigger buffer and write the file name, action and key fields to a logfile rather than to a data queue.
The logfile would have fields for the file name, the action and the keys (five, I think, is the maximum number of keys on any of our files), with the key values stored as text. It would also have a timestamp field that would be populated once the action had been processed.
I was going to use a scheduled PHP script to read the logfile and, for each record, perform the corresponding action on the table in MySQL. By using PHP I should be able to avoid dealing with alpha/numeric conversion and field lengths, and can hopefully have just one script that handles any file.
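To make the idea concrete, here is a sketch of that processing loop in Python rather than PHP, since the logic is the same either way. The table and column names (`synclog`, `customer`, `key1`…`key5`) are illustrative assumptions, not taken from the post, and SQLite stands in for both the IBM i logfile and the MySQL target so the example is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Logfile layout as described in the post: source file name, action code,
# up to five key values stored as text, and a processed timestamp that
# starts out NULL and is stamped once the change has been applied.
cur.execute("""
    CREATE TABLE synclog (
        id        INTEGER PRIMARY KEY,
        filename  TEXT NOT NULL,
        action    TEXT NOT NULL,          -- 'I', 'U' or 'D'
        key1 TEXT, key2 TEXT, key3 TEXT, key4 TEXT, key5 TEXT,
        processed TEXT                    -- NULL until actioned
    )
""")

# A stand-in target table (this would live in MySQL).
cur.execute("CREATE TABLE customer (custno TEXT PRIMARY KEY, name TEXT)")

# Two pending changes, as the trigger program would have written them.
cur.executemany(
    "INSERT INTO synclog (filename, action, key1) VALUES (?, ?, ?)",
    [("CUSTOMER", "I", "1001"), ("CUSTOMER", "D", "1001")],
)

def process_pending(cur):
    """Replay unprocessed logfile rows against the target, oldest first."""
    cur.execute("SELECT id, filename, action, key1 FROM synclog "
                "WHERE processed IS NULL ORDER BY id")
    for rowid, filename, action, key1 in cur.fetchall():
        if action in ("I", "U"):
            # A real script would re-read the source row by key and upsert
            # its current values; dummy data is used here.
            cur.execute("INSERT OR REPLACE INTO customer VALUES (?, ?)",
                        (key1, "fetched-from-source"))
        elif action == "D":
            cur.execute("DELETE FROM customer WHERE custno = ?", (key1,))
        # Stamp the entry only after the action succeeded, so a failure
        # (e.g. an unreachable MySQL server) leaves it pending for retry.
        cur.execute("UPDATE synclog SET processed = datetime('now') "
                    "WHERE id = ?", (rowid,))

process_pending(cur)

# Purge actioned entries older than the retention window (30 days here).
cur.execute("DELETE FROM synclog WHERE processed IS NOT NULL "
            "AND processed < datetime('now', '-30 days')")
```

Driving everything off the key values (rather than the full before/after images) keeps the logfile narrow, at the cost of one extra read of the source row when applying inserts and updates.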
The logfile will also serve as an audit trail of all file updates, and if anything goes wrong with the program that processes it (in particular, if it tries to update MySQL on a server that can't be contacted), the change requests will remain queued until everything is working again. Actioned logfile entries will be purged after a given time to keep the file small.
I am assuming that PHP can't read data queues, so writing to a logfile instead widens the scope for how the data can be processed.
Any thoughts on this as against other methods that people have used?