Sunday 11 September 2011

Teradata 13.10 Features: GoldenGate, Data Mover, Temporal Tables/Queries/Databases, etc.


This post covers GoldenGate, Data Mover, and temporal tables/queries, which are among the most important of the new features.

GOLDEN GATE

What is GoldenGate?

GoldenGate is an independent software vendor that provides a platform for real-time synchronization and management of transactional data across the enterprise.


Need for GoldenGate


GoldenGate captures, routes, enhances and applies mission-critical transactional data in real time with guaranteed integrity across a wide variety of data sources. Hundreds of large businesses worldwide in industries such as banking, financial services, healthcare, retail and telecommunications have leveraged GoldenGate to address highly visible and fundamental data challenges including:
  • Protecting and enhancing data availability
  • Sharing, managing and integrating data across a variety of platforms and databases
  • Accessing constantly increasing volumes of data on the appropriate platforms
  • Ensuring production availability even while upgrading hardware, software and applications
  • Meeting regulatory compliance issues
GoldenGate allows businesses to "cross that real-time bridge" enabling them to significantly mitigate risk, reduce costs and increase revenues while evolving toward a real-time enterprise.
Product/Service Description
GoldenGate Software's Transactional Data Management (TDM) technology allows customers to capture, route and deliver high volumes of transactional data with only sub-second latency across heterogeneous environments, including Teradata as a source and/or target.

Teradata customers use the GoldenGate TDM platform to implement several solutions that enable significant improvements to business intelligence and reporting, and additionally provide the highest availability for the data warehouse particularly in an active enterprise data warehouse situation.
  • Active Load for real-time data feeds
  • Active Availability for improved uptime and recovery
GoldenGate TDM technology should be evaluated by Teradata customers who are seeking to improve latency of data acquisition, eliminate batch windows, synchronize two Teradata warehouses for business continuity improvements and/or where the source system is regularly processing an increasing volume of data transactions.

Joint Solutions Between GoldenGate and Teradata
Teradata Active Enterprise Intelligence relies upon the deployment of a Teradata Active Data Warehouse that can help frontline, operational users make smarter decisions faster. Essential to this is the ability to deliver uninterrupted access to the most current data possible. GoldenGate provides Teradata customers with two solutions for the Teradata warehouse:
  • Active Load for real-time data feeds
  • Active Availability for improved uptime and recovery
Benefits of GoldenGate and Teradata Working Together
  • Continuously capture and deliver changed data in real time between source databases and the Teradata warehouse
  • Eliminate dependencies on batch windows
  • Apply transformations inside the Teradata database engine
  • Easily deploy, modify and scale GoldenGate to handle increasing data volumes and throughput
  • Implement Dual-Active Teradata for high availability with bi-directional synchronization
                               
Teradata Data Mover

Easy, Effective Data Movement


Moving data from one system to another is a key part of your analytical ecosystem operations. Well-planned, monitored, and managed data movement is essential to effective operation of your overall ecosystem.
Whether your goal is to complete a one-time copy of data from your enterprise data warehouse (EDW) to your test system or to regularly and continuously synchronize a dual-system environment, you need confidence that the data will be in the right place at the right time while causing the smallest possible impact on the data warehouse's mission of serving your business.
The decision about the best way to copy data between Teradata systems depends on many factors, including the data model and indexes present, other workloads in your system, and additional load jobs underway.
What you need is a solution that automates the data movement process and accommodates the many situations found within your comprehensive analytical ecosystem. That's exactly what Teradata® Data Mover delivers to you and your business: all the supporting capabilities you need to easily, effectively, and affordably meet your data movement demands.
Intelligent Selection
Teradata Data Mover is a utility designed specifically to copy data and objects such as tables and statistics from one Teradata system to another. It takes advantage of the built-in underlying technologies already in your Teradata system. And it understands the available techniques you have for extracting data from or loading data into your Teradata Database, as well as our various load utilities, backup/restore products, and the JDBC driver.
Leverage Existing Technologies
Teradata Data Mover leverages all of the established data loading protocols and technologies to meet your needs and environment best.
You can use the utility for:
> Regular periodic tasks, including populating a dependent data mart from an EDW.
> Special one-time tasks, such as moving data to a test system.
The utility also works with every member of our Teradata Purpose-Built Platform Family. In addition, it supports Teradata Database V2R6.1 and higher and runs on a Teradata Managed Server for high availability and comprehensive monitoring.

http://2.bp.blogspot.com/-aex7z5ewwEo/TkTD_QcGvXI/AAAAAAAAADs/FDREAkGzPPY/s320/Fig1datamove.png
An End-to-End Management Solution
You can also integrate data movement within your comprehensive ecosystem management and load jobs, and automate Teradata Data Mover for your production execution needs. That's because the utility can be tightly integrated with Teradata Multi-System Manager for end-to-end management of changes to your environment.
The utility works with Teradata Multi-System Manager to support these commands and status messages:
> Send event for the start of a job.
> Send event for the end of a job.
> Send event for a job in progress.
And Teradata Data Mover can be integrated with load processes to copy data to a second system.
That means you can copy data from an EDW to a data mart as soon as the data are loaded into the EDW.
Teradata Data Mover also provides safeguards to force the direction of data movement and eliminate any possibility of copying data in the wrong direction. Teradata Data Mover not only copies data, but also reduces operational cost and complexity and improves your overall data management capabilities by copying an array of objects, including tables, PPI tables, global temporary tables, tables with LOB columns, statistics, triggers, join indexes, hash indexes, tables with UDT columns, and users.
Swift, Simple Data Movement
Your analytical ecosystem uses multiple Teradata systems, each with different roles.
For example, you may have a Teradata Active Enterprise Data Warehouse serving as an EDW, a Teradata Extreme Data Warehouse for deep history web click analysis, and a Teradata Data Mart Appliance as a test and development system.
As part of your critical ongoing maintenance, you must be able to move or copy data from one system to another.
Teradata Data Mover simplifies data movement and enables process automation, control, and monitoring. It also provides a point-and-click user interface to enable powerful data movement commands.

http://3.bp.blogspot.com/-sW0rMdfNUmk/TkTGFk8WmHI/AAAAAAAAAD0/x1vXQkt10jw/s320/datamove.png
Flexible, Powerful, Easy-to-Use
The value of any analytical system rests heavily on having the right data in the system.
In fact, the very success of your enterprise can depend on the smooth operation of the data movement.
That’s why it’s so vital that data be moved quickly and reliably between your Teradata systems.
And you can add more value – and reduce operational costs – when you use production quality tools to create standardized jobs, monitor status and progress, and handle exceptions.
Teradata Data Mover offers you a complete solution to improve your processes and reduce the cost involved in maintaining the analytical ecosystem.
Take advantage of its scope of data movement technologies and breadth of control mechanisms.
Whether it’s event-driven for automatically moving newly loaded data or ad-hoc DBA-initiated copies for exception processing, Teradata Data Mover gives you the power and flexibility to strengthen your analytical environment and deliver even more value from your data.

What is a Temporal Database? Definitions:
  • Temporal – the ability to store all historic states of a given set of data (a database row), and as part of the query select a point in time to reference the data. Examples:
    • What was this account balance (share price, inventory level, asset value, etc) on this date?
    • What data went into the calculation on 11/30/07, and what adjustments were made in 2Q08?
    • On this historic date, what was the service level (contract status, customer value, insurance policy coverage) for said customer?
  • Three Types of Temporal Tables
    • Valid Time Tables
      • When a fact is true in the modeled reality
      • User specified times
    • Transaction Time Tables
      • When a fact is stored in the database
      • System maintained time, no user control
    • Bitemporal Tables
      • Both Transaction Time and Valid Time (a table-declaration sketch follows this list)
  • User Defined Time
    • User can add time period columns, and take advantage of the added temporal operators
    • Database does not enforce any rules on user defined time columns
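As a rough illustration of the bitemporal case, a minimal table-declaration sketch is shown below. The database, table, and column names are hypothetical; the PERIOD column clauses follow the Teradata 13.10 temporal DDL form (AS VALIDTIME / AS TRANSACTIONTIME), but consult the Temporal Table Support manual for the full set of options.

-- Hypothetical bitemporal table (sketch only)
CREATE MULTISET TABLE edw.policy
  (policy_id       INTEGER NOT NULL
  ,customer_id     INTEGER
  ,premium_amt     DECIMAL(12,2)
  -- Valid time: when the fact is true in the modeled reality (user supplied)
  ,policy_validity PERIOD(DATE) NOT NULL AS VALIDTIME
  -- Transaction time: when the fact is recorded in the database (system maintained)
  ,policy_tt       PERIOD(TIMESTAMP(6) WITH TIME ZONE) NOT NULL AS TRANSACTIONTIME
  )
PRIMARY INDEX (policy_id);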
Temporal Query
Provide a list of members who were reported as covered on Jan. 15, 2000 in the Feb. 1, 2000 NCQA report, with names as accurate as our best data shows today.
Without Temporal Support
select member.member_id
      ,member.member_nm
from edw.member_x_coverage coverage
    ,edw.member
where coverage.member_id = member.member_id
  and coverage.observation_start_dt <= '2000-02-01'
  and (coverage.observation_end_dt > '2000-02-01'
       or coverage.observation_end_dt is NULL)
  and coverage.effective_dt <= '2000-01-15'
  and (coverage.termination_dt > '2000-01-15'
       or coverage.termination_dt is NULL);
With Temporal Support
VALIDTIME AS OF DATE '2000-01-15'
AND TRANSACTIONTIME AS OF DATE '2000-02-01'
SELECT member.member_id, member.member_nm
FROM edw.member_x_coverage, edw.member
WHERE member_x_coverage.member_id = member.member_id;
Temporal Update – Bitemporal Table
Current valid time, current transaction time. Example: the jeans (item_id 125, serial number 102) are sold today (2005-08-30), so their location changes to 'External'.
With Temporal Support
UPDATE objectlocation
SET location = 'External'
WHERE item_id = 125
AND item_serial_num = 102;
Without Temporal Support
-- 1. Insert the new location, valid from now until the end of the current row's valid period
INSERT INTO objectlocation
SELECT item_id, item_serial_num, 'External', CURRENT_TIME, END(vt), CURRENT_TIME, 'Until_Closed'
FROM objectlocation
WHERE item_id = 125 AND item_serial_num = 102
AND BEGIN(vt) <= CURRENT_TIME
AND END(vt) > CURRENT_TIME
AND END(tt) = 'Until_Closed';
-- 2. Re-insert the old location with its valid period truncated at now
INSERT INTO objectlocation
SELECT item_id, item_serial_num, location, BEGIN(vt), CURRENT_TIME, CURRENT_TIME, 'Until_Closed'
FROM objectlocation
WHERE item_id = 125 AND item_serial_num = 102
AND BEGIN(vt) <= CURRENT_TIME
AND END(vt) > CURRENT_TIME
AND END(tt) = 'Until_Closed';
-- 3. Logically delete the superseded current row by closing its transaction time
UPDATE objectlocation
SET END(tt) = CURRENT_TIME
WHERE item_id = 125 AND item_serial_num = 102
AND BEGIN(vt) <= CURRENT_TIME
AND END(vt) > CURRENT_TIME
AND END(tt) = 'Until_Closed';
-- 4. Re-insert any future-dated rows with the new location
INSERT INTO objectlocation
SELECT item_id, item_serial_num, 'External', BEGIN(vt), END(vt), CURRENT_TIME, 'Until_Closed'
FROM objectlocation
WHERE item_id = 125 AND item_serial_num = 102
AND BEGIN(vt) > CURRENT_TIME
AND END(tt) = 'Until_Closed';
-- 5. Close the transaction time of the superseded future-dated rows
UPDATE objectlocation
SET END(tt) = CURRENT_TIME
WHERE item_id = 125 AND item_serial_num = 102
AND BEGIN(vt) > CURRENT_TIME
AND END(tt) = 'Until_Closed';
Moving Current Date in PPI
  • Description
    • Support use of CURRENT_DATE and CURRENT_TIMESTAMP built-in functions in Partitioning Expression.
    • Ability to reconcile the values of these built-in functions to a newer date or timestamp using ALTER TABLE.
      • Optimally reconciles the rows with the newly resolved date or timestamp value.
      • Reconciles the PPI expression.
  • Benefit
    • Users can define partitioning with 'moving' dates and timestamps easily, instead of manually redefining the PPI expression with constants (see the sketch after this list).
      • Date-based partitioning is a typical use of PPI. If a PPI is defined with a 'moving' current date or current timestamp, the partition that contains the most recent data can be kept as small as possible for efficient access.
    • Required for Temporal semantics feature – provides the ability to define ‘current’ and ‘history’ partitions.
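As a rough sketch of how this can be used (the sales table, its columns, and the exact RANGE_N bounds below are assumptions, not the only supported form):

-- Hypothetical table partitioned on a moving window of monthly ranges ending at CURRENT_DATE
CREATE TABLE sales
  (sale_id INTEGER NOT NULL
  ,sale_dt DATE NOT NULL
  ,amount  DECIMAL(12,2))
PRIMARY INDEX (sale_id)
PARTITION BY RANGE_N(sale_dt BETWEEN CURRENT_DATE - INTERVAL '11' MONTH
                                 AND CURRENT_DATE
                                EACH INTERVAL '1' MONTH
                    ,NO RANGE);

-- Later, roll the partitioning window forward; the database reconciles the rows
-- and the PPI expression to the newly resolved date:
ALTER TABLE sales TO CURRENT;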
Time Series Expansion Support
  • Description
    • New EXPAND ON clause added to SELECT to expand row with a period column into multiple rows
      • EXPAND ON clause allowed in views and derived tables
    • EXPAND ON syntax supports multiple ways to expand rows
  • Benefit
    • Permits time based analysis on period values
      • Allows business questions such as ‘Get the month end average inventory cost during the last quarter of the year 2006’
      • Allows OLAP analysis on period data
    • Allows charting of period data in an Excel format
    • Provides infrastructure for sequenced query semantics on temporal tables.
Time Series Expansion Support
  • What will it do?
    • Expand a time period column and produce value-equivalent rows, one for each time granule in the period
      • Time granule is user specified
      • Permits a period representation of the row to be changed into an event representation
    • The following forms of expansion are provided (see the sketch after this list):
      • Interval expansion
        • By the user specified intervals such as INTERVAL ‘1’ MONTH
      • Anchor point expansion
        • By the user specified anchored points in a time line
      • Anchor period expansion
        • By user specified anchored time durations in a time line
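A rough sketch of the interval form, against a hypothetical inventory table with a PERIOD(DATE) column named in_stock (the table and column names are assumptions):

-- Produce one row per month for each period during which an item was in stock
SELECT item_id
      ,item_cost
      ,BEGIN(monthly) AS month_start
FROM inventory
EXPAND ON in_stock AS monthly BY INTERVAL '1' MONTH;

Each expanded row carries the original column values plus a one-month granule in the alias column (monthly here), which aggregations or OLAP functions can then operate on.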
Geospatial Enhancements
  • Description
    • Enhancements to the Teradata 13 Geospatial offering that drastically increase performance, add functionality, and provide integration points for partner tools
  • Benefits
    • Increased performance by changing UDFs to Fast Path System functions
    • Replace the Shape File Generator client tool (org2org) with a stored procedure for tighter integration with the database and tools such as ESRI ARCGIS
    • Provide geodetic distance methods, such as SphericalBufferMBR() (a geodetic distance sketch follows this list)
    • WFS Server provides better tool integration support for MapInfo and ESRI products
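As a rough illustration of a geodetic distance calculation, the sketch below assumes a hypothetical stores table with an ST_Geometry point column named geo_pt and uses the ST_SphericalDistance method, which returns the distance in meters between two points on a sphere:

-- Pairwise distances in meters between store locations (geo_pt is an ST_Geometry point)
SELECT s1.store_id
      ,s2.store_id
      ,s1.geo_pt.ST_SphericalDistance(s2.geo_pt) AS dist_m
FROM stores s1, stores s2
WHERE s1.store_id < s2.store_id;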
ESRI ArcGIS Connecting to Teradata via Safe Software FME

http://4.bp.blogspot.com/-lzRETBKw6kA/Tl8DQ4EDG8I/AAAAAAAAAEU/2geDaZ8eZBE/s320/ESRI%2BARCGIS.png

  • FME connection in ArcView
  • Connect to Teradata via the TPT API
  • Select Teradata tables for ArcView analysis
Algorithmic Compression
  • Description
    • Provides the capability for users to define compression/decompression algorithms, implemented as UDFs, that are specified and applied to data at the column level in a row. Initially, Teradata provides two compression/decompression algorithm pairs: one for UNICODE columns and another for LATIN columns (a declaration sketch follows this list).
  • Benefit
    • Data compression is the process by which data is encoded so that it consumes less physical storage space. This capability reduces both the overall storage capacity needs and the number of physical disk I/Os required for a given operation. Additionally, because less physical data is being operated on there is the potential to improve query response time as well.
  • Considerations
    • Compressed data must be decompressed when it is read, which uses some extra CPU cycles, but in general the advantages of compression outweigh the extra cost of decompression.
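A minimal sketch of column-level algorithmic compression on a UNICODE column, assuming the Teradata-supplied TransUnicodeToUTF8/TransUTF8ToUnicode function pair (the table and column names are hypothetical):

-- Hypothetical table compressing a UNICODE column with the supplied transform functions
CREATE TABLE product_comments
  (comment_id  INTEGER NOT NULL
  ,comment_txt VARCHAR(2000) CHARACTER SET UNICODE
     COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
     DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode)
PRIMARY INDEX (comment_id);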
Multi-Value Compression for VARCHAR Columns
  • Example – multi-value compression on a VARCHAR column:
CREATE TABLE Customer
  (Customer_Account_Number INTEGER
  ,Customer_Name VARCHAR(150) COMPRESS ('Rich','Todd')
  ,Customer_Address CHAR(200));
Block Level Compression
  • Description
    • Feature provides the capability to perform compression on whole data blocks at the file system level before the data blocks are actually written to storage.
  • Benefit
    • Block level compression yields benefit by reducing the actual storage required for storing the data, especially cool/cold data, and by significantly reducing the I/O required to read the data.
  • Considerations
    • There is a CPU cost to compress and decompress whole data blocks, but this is generally considered a good trade-off since CPU cost is decreasing while I/O cost remains high.
User-Defined SQL Operators
  • Description
    • This feature provides the capability that will allow users to define and encapsulate complex SQL expressions into a User Defined Function (UDF) database object.
  • Benefits
    • The use of the SQL UDFs Feature allows users to define their own functions written using SQL expressions. Previously, the desired SQL expression would have to be written into the query for each use or alternatively, an external UDF could have been written in another programming language to provide the same capability.
    • Additionally, SQL UDFs allow one to define functions available in other databases and with alternative syntax (e.g. ANSI).
  • Considerations
    • The Teradata SQL UDF feature is a subset of the SQL function feature described in the ANSI SQL:2003 standard.
    • Additionally, this feature does not introduce any changes to the definition of the Dictionary Tables per se, but will add additional rows into the DBC.TVM and DBC.UDFInfo tables to indicate the presence of a SQL UDF.
SQL UDF - Example
The “Months_Between” Function:
CREATE FUNCTION Months_Between
  (Date1 DATE, Date2 DATE)
RETURNS INTERVAL MONTH(4)
LANGUAGE SQL
DETERMINISTIC
CONTAINS SQL
PARAMETER STYLE SQL
RETURN (CAST(Date1 AS DATE) - CAST(Date2 AS DATE)) MONTH(4);

SELECT MONTHS_BETWEEN('2008-01-01', '2007-01-01');

MONTHS_BETWEEN('2008-01-01','2007-01-01')
-----------------------------------------
                                       12
