Teradata Cookbook

How to do it...

INSERT is one of the most frequently executed SQL statements on any system. With a small tweak, we can optimize it for better performance:

  1. Create a source table:
/**Source Table**/
CREATE SET TABLE TEST01.web_clicks
(Web_page_Number INTEGER,
Location_Number INTEGER,
Amount DECIMAL(10,2))
UNIQUE PRIMARY INDEX ( Web_page_Number );
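To have something to copy in the later steps, a few sample rows can be loaded into the source table (the values below are hypothetical, for illustration only):
/**Sample data -- hypothetical values**/
INSERT INTO TEST01.web_clicks VALUES (1, 100, 25.50);
INSERT INTO TEST01.web_clicks VALUES (2, 101, 40.00);
INSERT INTO TEST01.web_clicks VALUES (3, 100, 12.75);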
  2. Create a target table with the same PI as the source table:
/**Target Table**/
CREATE SET TABLE TEST01.Web_clicks_PI
(Web_page_Number INTEGER, Location_Number INTEGER,
Amount DECIMAL(10,2))
UNIQUE PRIMARY INDEX (Web_page_Number);
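For contrast, if the target table were created with a different primary index (the hypothetical Web_clicks_NUPI below, not part of this recipe), Teradata would have to redistribute every row by the new PI's row hash during the INSERT ... SELECT, adding a redistribution step to the plan and slowing the load:
/**Hypothetical target with a DIFFERENT PI -- for illustration only**/
CREATE SET TABLE TEST01.Web_clicks_NUPI
(Web_page_Number INTEGER,
Location_Number INTEGER,
Amount DECIMAL(10,2))
PRIMARY INDEX ( Location_Number );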
  3. Run the explain of the INSERT ... SELECT query:
/**Explain part**/
EXPLAIN INSERT INTO TEST01.Web_clicks_PI SELECT * FROM TEST01.web_clicks;
1) First, we lock a distinct TEST01."pseudo table" for write on a RowHash to prevent global deadlock for TEST01.Web_clicks_PI.
2) Next, we lock a distinct TEST01."pseudo table" for read on a RowHash to prevent global deadlock for TEST01.web_clicks.
3) We lock TEST01.Web_clicks_PI for write, and we lock TEST01.web_clicks for read.
4) We do an all-AMPs MERGE into TEST01.Web_clicks_PI from TEST01.web_clicks. The size is estimated with no confidence to be 25,382 rows. The estimated time for this step is 2.29 seconds.
5) We spoil the parser's dictionary cache for the table.
6) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> No rows are returned to the user as the result of statement 1.
  4. Now run the INSERT ... SELECT to copy the data from the source table into the target table:
INSERT INTO TEST01.Web_clicks_PI SELECT * FROM TEST01.web_clicks;
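Because both tables share the same unique primary index, each AMP merges its own rows locally with no redistribution step in the plan. If the target ever needs to be rebuilt, emptying it first lets Teradata use its fast-path INSERT ... SELECT into an empty table, which avoids transient journaling row by row (a sketch, assuming the tables created above):
/**Fast-path reload -- empty the target first**/
DELETE FROM TEST01.Web_clicks_PI ALL;
INSERT INTO TEST01.Web_clicks_PI SELECT * FROM TEST01.web_clicks;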