ibmi-brunch-learn

Materialized Query Table (MQT)?


  • Materialized Query Table (MQT)?

    I have a new project where I'll be creating a table that contains data driven by our Order Entry system (specific groups of orders)... the table needs to also contain columns with summarized data (via ILE RPG procedure in a service program), and columns that will be maintained/changed via a user interface.

    I need to keep the rows and many of the columns "in sync" with our ever-changing O/E system, while allowing users to incorporate additional information that is not in our O/E system.

    I've heard of these MQTs... read this short introduction



    And this is where my knowledge of MQTs ends. I'm decent with ILE RPG and would consider myself a novice with SQL.

    I'm not sure if this is the correct approach for me... I'm not sure what the best route is for keeping this multi-user table in sync with our existing database.

    I welcome any suggestions.

  • #2
    Re: Materialized Query Table (MQT)?

    The answer is: it depends. MQTs are tables that work like views. The functional difference is that a view holds no data; it retrieves it on the fly. An MQT actually becomes a table containing the data that was available when it was last refreshed, so you need to refresh it to get current data. If you need "live" data, an MQT does not work well. Here is an example of where I use one.
    I have a sales history query with about 75 files joined. It takes a few minutes for every query to run, and I run a few thousand queries every night, so it was taking several hours. The MQT can refresh in about 35 minutes, and a query over the MQT takes seconds or less (it's a single-file query). Overall, that is a big savings. This works for me because we post sales once a day, and I refresh the MQT after that. The result is that any queries that run later in the day show the data "as of" the time of the refresh.
    So, yes, it can work if you can live with the MQT being out of date between refreshes. If you can't, an MQT is not the way to go.
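
    As a rough sketch of what that pattern looks like in DB2 for i SQL (library, table, and column names below are invented for illustration):

    ```sql
    -- Hypothetical summary MQT over a sales history table.
    -- DATA INITIALLY DEFERRED / REFRESH DEFERRED means the table is
    -- populated and re-populated only when REFRESH TABLE is executed.
    CREATE TABLE mylib.sales_summary_mqt AS
      (SELECT cust_no,
              item_no,
              SUM(qty_sold)   AS total_qty,
              SUM(ext_amount) AS total_amount
         FROM mylib.sales_history
        GROUP BY cust_no, item_no)
      DATA INITIALLY DEFERRED
      REFRESH DEFERRED
      MAINTAINED BY USER;

    -- Run after the nightly posting job to re-materialize the data:
    REFRESH TABLE mylib.sales_summary_mqt;
    ```

    Queries then read mylib.sales_summary_mqt directly and see the data "as of" the last REFRESH.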



    • #3
      Re: Materialized Query Table (MQT)?

      Sounds cool, but couldn't you just use views with proper indexes instead?
      Hunting down the future ms. Ex DeadManWalks. *certain restrictions apply



      • #4
        Re: Materialized Query Table (MQT)?

        Originally posted by arrow483
        Answer is: It depends. MQT's are tables that work like views... you need to refresh it to get current data. If you need "live" data, MQT does not work well. ... So, yes it can work if you can live with the MQT being out of date between refreshes. If you can't, MQT is not the way to go.
        Well, I need to be real-time... at least "refreshed" each time a user accesses the file. So it sounds like the MQT may not be right for me.

        Originally posted by DeadManWalks
        sounds cool but could you just not use views instead with proper indexes?
        You cannot update a view. I need to be able to add & update information related to the specific orders... I want to retain the same information. Not sure how to accomplish that with views.



        • #5
          Re: Materialized Query Table (MQT)?

          You cannot update a view. I need to be able to add & update information related to the specific orders... I want to retain the same information. Not sure how to accomplish that with views.
          Not true!
          A view can be updated if it is updatable, i.e. it consists of only a single table without any aggregation.
          A joined view can also be updated in conjunction with INSTEAD OF triggers.
          When updating a joined view, the associated INSTEAD OF trigger is called, which updates the (joined) base tables.

          An MQT is a table/physical file that is based on a SELECT statement, but it must be refreshed manually by executing a REFRESH statement.
          Because an MQT is a table, indexes can be built over it.
          Even though an MQT can be updated, the base tables are NOT, i.e. the modifications get lost with the next REFRESH.

          A view is also based on a SELECT statement, but it does not need to be refreshed manually.
          Each time a query based on the view is executed, the stored SELECT statement is executed and the current data is returned.
          Since views do not include any key information, additional indexes (binary radix tree and encoded vector indexes) are necessary to allow fast access to the data.

          MQTs might be helpful for statistical reports, but SQL views in combination with EVIs (with the INCLUDE clause) are in most cases the better choice.

          Birgitta
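
          As a rough sketch of the INSTEAD OF trigger technique described above (all view, table, and column names are invented for illustration):

          ```sql
          -- Hypothetical joined view over an order header table and an
          -- extension table holding user-entered data.
          CREATE VIEW mylib.order_detail_v AS
            SELECT h.order_no, h.cust_no, x.user_note
              FROM mylib.order_hdr h
              JOIN mylib.order_ext x ON x.order_no = h.order_no;

          -- A join view is not directly updatable, but an INSTEAD OF
          -- trigger intercepts the UPDATE and applies it to a base table.
          CREATE TRIGGER mylib.order_detail_upd
            INSTEAD OF UPDATE ON mylib.order_detail_v
            REFERENCING NEW AS n OLD AS o
            FOR EACH ROW
            UPDATE mylib.order_ext
               SET user_note = n.user_note
             WHERE order_no = o.order_no;
          ```

          An UPDATE against mylib.order_detail_v then lands in mylib.order_ext.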



          • #6
            Re: Materialized Query Table (MQT)?

            Thanks... Sounds like the MQT is not the right choice here. I've never heard of INSTEAD OF triggers. That's all new to me, and very interesting.

            But back to my original question... I would like to keep this as simple as possible and be able to retain historical information. So what about creating a table (keyed the same as my O/E database) to contain the "extra" columns of information that a user needs to input, and then using views to display the remaining information derived from the O/E system? Some of the columns must contain values derived from SQL or ILE RPG functions.

            I was considering rebuilding the entire table each time a user accessed it (adding new records, updating the derived column values, changing the status - open, shipped, canceled, etc.).

            I don't want to make this any more complicated than it has to be.
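
            As an aside, rather than rebuilding the whole table on each access, that kind of re-sync can be expressed in a single statement with SQL's MERGE (available on DB2 for i from 7.1). A rough sketch with invented names:

            ```sql
            -- Hypothetical: bring the extension table in line with the
            -- qualifying orders in the O/E system.
            MERGE INTO mylib.order_ext x
            USING (SELECT order_no, order_sts
                     FROM mylib.order_hdr
                    WHERE order_type = 'B2B') h  -- "specific groups of orders"
               ON x.order_no = h.order_no
             WHEN MATCHED THEN
                  UPDATE SET x.order_sts = h.order_sts
             WHEN NOT MATCHED THEN
                  INSERT (order_no, order_sts, user_note)
                  VALUES (h.order_no, h.order_sts, ' ');
            ```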



            • #7
              Re: Materialized Query Table (MQT)?

              I would likely try a solution along these lines first:
              • Create the Order Entry extension table, but only include the additional new columns of detail data. i.e. I wouldn't include columns for the summary data, because summary data represents redundant information. Redundant data is sometimes necessary for performance, but you should performance test a solution without redundant data first.
              • Create an SQL view that joins together the order entry tables, including the new extension table, and present a detail view of the data.
              • Create an SQL view that provides a summary view of your data, presenting the summary calculations you need. By using a view, the data is never out of sync. If needed, this view can reference the first detail view created.
              • Use the combination of those two SQL views as needed in the application(s) that need them.
              • If at V7R1 or higher, consider creating one or more aggregate indexes matching (exactly) the summary calculations in the second summary view. Doing so will help the performance of the summary view. See the INCLUDE keyword documentation for CREATE INDEX in the V7R1 SQL Reference manual. If you do this, make sure these index(es) do not have too negative an impact on performance when rows are inserted, updated, and deleted in the order entry tables over which they are built.

              If performance was found to be acceptable, I'd likely run with that solution.

              Mike
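
              A minimal sketch of the outline above (every name is invented; the aggregate-index syntax follows the V7R1 CREATE INDEX INCLUDE support mentioned):

              ```sql
              -- 1. Extension table: only the new user-entered detail columns.
              CREATE TABLE mylib.order_ext (
                order_no  INTEGER      NOT NULL PRIMARY KEY,
                user_note VARCHAR(100) NOT NULL DEFAULT ''
              );

              -- 2. Detail view joining order entry to the extension table.
              CREATE VIEW mylib.order_detail_v AS
                SELECT h.order_no, h.cust_no, h.ext_amount, x.user_note
                  FROM mylib.order_hdr h
                  JOIN mylib.order_ext x ON x.order_no = h.order_no;

              -- 3. Summary view; never out of sync, because it is computed on read.
              CREATE VIEW mylib.order_summary_v AS
                SELECT cust_no, SUM(ext_amount) AS total_amount
                  FROM mylib.order_detail_v
                 GROUP BY cust_no;

              -- 5. V7R1+: aggregate EVI matching the summary calculation exactly.
              CREATE ENCODED VECTOR INDEX mylib.order_hdr_evi
                ON mylib.order_hdr (cust_no)
                INCLUDE (SUM(ext_amount));
              ```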



              • #8
                Re: Materialized Query Table (MQT)?

                Originally posted by Michael Jones
                I would likely try a solution along these lines first: Create the Order Entry extension table, but only include the additional new columns of detail data... Create an SQL view that joins together the order entry tables, including the new extension table... Create an SQL view that provides a summary view of your data... If at V7R1 or higher, consider creating one or more aggregate indexes matching (exactly) your summary calculations in the second summary view...

                If performance was found to be acceptable, I'd likely run with that solution.

                Mike
                Thanks Mike... That is sort of what I've settled on. But I'm still missing the piece where the rows get created in the Order Entry extension table. The new table should only contain records for specific types of orders created in our O/E system. I thought about a trigger program (in RPG) to create the records initially. Also, some of the summary functions will be performed by RPG (the business logic already exists).

                One other general question - where do you typically store the "source" SQL statements for views and such? I've seen them stored in source members (QSQLSRC), on the PC, and in an RDi project.
                I'm accustomed to storing DDS and RPG in source members... but I have quite a few views created for DB2 Web Query that are stored (by RDi) on my PC.



                • #9
                  Re: Materialized Query Table (MQT)?

                  We store all DDL (Data Definition Language) SQL scripts in traditional source physical file members, because all our other source is also stored in source file members.
                  To recreate the database objects, we either execute RUNSQLSTM in a green screen environment or open the source member with IBM i Navigator - Run SQL Scripts and execute the statements from there.

                  Birgitta
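
                  For example (library and member names invented), a DDL source member can be executed with RUNSQLSTM from green screen, or from Run SQL Scripts using the CL: prefix:

                  ```sql
                  -- Green screen:
                  --   RUNSQLSTM SRCFILE(MYLIB/QSQLSRC) SRCMBR(ORDEREXT) COMMIT(*NONE)
                  -- From IBM i Navigator's Run SQL Scripts:
                  CL: RUNSQLSTM SRCFILE(MYLIB/QSQLSRC) SRCMBR(ORDEREXT) COMMIT(*NONE);
                  ```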



                  • #10
                    Re: Materialized Query Table (MQT)?

                    If your summary functions are fairly complex, and need to be calculated by the RPG business logic, then creating summary columns to hold those results is reasonable. On the other hand, if the summary calculations in the RPG business logic are fast enough, you may be able to avoid creating redundant summary columns (nice if possible), and create external SQL functions to act as wrappers around those RPG summary calculations. Said SQL function wrappers would be referenced in the summary SQL view.
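
                    An external SQL function wrapping an existing RPG subprocedure might look roughly like this (the service program, procedure, and parameter names are all hypothetical):

                    ```sql
                    -- Exposes ILE RPG subprocedure ORDERTOTAL in service program
                    -- MYLIB/ORDSRV so it can be referenced from SQL views.
                    CREATE FUNCTION mylib.order_total (p_order_no INTEGER)
                      RETURNS DECIMAL(11, 2)
                      LANGUAGE RPGLE
                      DETERMINISTIC
                      NO SQL
                      EXTERNAL NAME 'MYLIB/ORDSRV(ORDERTOTAL)'
                      PARAMETER STYLE GENERAL;
                    ```

                    A summary view could then select mylib.order_total(order_no) instead of storing a redundant column.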

                    If the RPG summary calculations are very simple, where they can be easily converted to SQL with low risk, I'd likely create SQL objects to replace them, and phase out the RPG calculations as time permits. You'll likely be able to get better performance out of SQL objects versus RPG.

                    If you do wind up with summary columns in a physical table, then I'd try triggers first to keep the data in sync.

                    For the work I do, for various reasons that amount to a lengthy discussion, I store SQL source in a custom database table in DB2 on the host (not a source physical file). I don't recommend you do that, because it requires custom tools, which I have and you likely don't.

                    Small shops getting started with SQL should probably just use QSQLSRC files, although I'd create them with a row length of 240, which is the maximum row length at which SEU will still allow you to edit or display a source member (for the benefit of developers still using green screen SEU). If all your developers are OK with abandoning SEU (two thumbs up if so), make the row length longer.

                    Shops that are just getting started with SQL should also consider storing their SQL source in stream files in the Integrated File System (IFS). Note: I've not used the IFS for source myself, but the ability to organize source in hierarchical folders is attractive to me, versus the flat file system of QSYS.LIB. I do have concerns with using the IFS: What search tools are available for searching through source code? How fast do they run? How well does RDi work with IFS files? One reason of many why I store SQL source in the database itself is that I can use SQL queries to search through my SQL source code, and the results are lightning fast.

                    Where you can store SQL source is likely limited by the capabilities of your Change Control software, if you have any.

                    To ensure your SQL source is being backed up, and so it can be accessed by multiple developers, your source should be stored on a host system with a regular backup schedule!

                    Other options for storing and managing source code are open source repositories like Git, SVN, etc. I think those are a great choice for shops with very large projects and a large number of developers.

                    Mike



                    • #11
                      Re: Materialized Query Table (MQT)?

                      Mike & Birgitta... thanks for the replies. It's always helpful to know what other folks do.

                      We are a small company (i.e. a small shop). I do most of the programming, but we do have another programmer. Both of us use RDi... I will use SEU/SDA when convenient. As for change management software (I had to laugh) - we ARE it. So organizing source is pretty important to me.

                      Right now, the bulk of my SQL simply creates views for Web Query. That source is stored on my PC in an RDi project directory, where I can edit and run it (I copy this folder to our network, which is backed up). There are some things I like and dislike about this functionality in RDi (the dislikes: poor feedback when there is an error running a script, and ZERO search capability). I'm not sure what I'll do with that "source" - I mean, you can easily recreate the SQL from the object on the system using System i Navigator. Maybe I'm not using RDi correctly with SQL?

                      The remaining SQL is embedded in ILE RPG programs. Also, much of the RPG business logic is actually in SQLRPGLE service programs.

                      I've decided to use a physical table that gets updated by trigger programs. While some of the columns could be computed by SQL views, I decided to store the data in the physical file for historical purposes. Once the transactions are complete, I want to lock them down so we can look back and see the work as it was completed.
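
                      A rough sketch of the SQL form of such a trigger (all names invented; an external RPG trigger program added with ADDPFTRG would work as well):

                      ```sql
                      -- When a qualifying order is created in the O/E system,
                      -- seed a matching row in the extension table.
                      CREATE TRIGGER mylib.order_ext_ins
                        AFTER INSERT ON mylib.order_hdr
                        REFERENCING NEW AS n
                        FOR EACH ROW
                        WHEN (n.order_type = 'B2B')  -- "specific types of orders"
                        INSERT INTO mylib.order_ext (order_no, user_note)
                        VALUES (n.order_no, ' ');
                      ```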



                      • #12
                        Re: Materialized Query Table (MQT)?

                        I'm glad to hear you use RDi and SQL views. You're ahead of the curve compared to many IBM i shops!

                        You can easily recreate the SQL from the compiled SQL object.

                        That said, you want your small company to grow into a large one, and in a large shop you especially don't want other programmers to have to generate the SQL source every time they want to inspect it, to understand how something works, or to investigate whether an SQL object needs to be modified for a project. Generated SQL source also isn't as nicely formatted as source maintained by hand (assuming the developers format their SQL well). RDi has multiple ways of searching through source physical files on the host; IIRC, the iSphere plugin for RDi has a faster search tool for host source physical files. Wherever you choose to store your source in the long term, you want multiple developers to share the same copy of the source, and you want to be able to search through the code quickly. Fast code searches and highly organized source code are very important for productivity.

                        Maintaining replicated data for historical purposes (snapshot as of a point in time) is a very reasonable thing to do, especially when operational transaction tables tend to be periodically purged for performance reasons in most systems.

