Thursday, 29 December 2016

Mapping Failure Due to "The data channel used by this callback handler has been closed"


Problem

Many of the mappings fail with the error:

"[ICMD_10033] Command [RunMapping] failed with error [[ICMD_10000] [JSF_0010] The data channel used by this callback handler has been closed. No new events will be received on this handler.]."

 


Cause

The error occurs when the connection between the infacmd process and the Data Integration Service (DIS) process breaks.

The connection may break due to DIS process termination, network errors, or a heartbeat timeout on the connection. A heartbeat timeout (default 40 seconds) can occur when the system is starved of resources.

Solution

To resolve this issue, first determine whether there are network errors or a DIS termination and address those.

If the failure is due to a heartbeat timeout, check system resource utilization to identify any resource crunch and ensure that neither the infacmd process nor the DIS process is starved of resources. Alternatively, set the domain-level custom property "HeartbeatTimeout" to a value higher than 40 seconds to increase tolerance of network delays (requires a service restart).

Wednesday, 28 December 2016

HOW TO: Delete older object versions in a PowerCenter versioned repository



Problem Description 
There are older checked-in versions of PowerCenter objects in a PowerCenter folder or repository.
For the largest and most actively maintained folders, response time in the Designer is slow.
Is there a way to clean them up?

Solution
This can be done using one of the following:
  • Advanced Purge
    Using the Repository Manager.
  • PurgeVersion
    Using the pmrep command line tool.
  • Purge Object Version
    Using the Repository Manager.

Advanced Purge (Repository Manager)


You can delete older checked-in versions in the Repository Manager using the Advanced Purge option as follows:
  1. Open Repository Manager.
  2. Connect to the versioned repository.
  3. Select one of the following:
    • Repository (to purge all objects in the repository)
    • Folder (to purge all objects in a folder)
    • Object (to purge one object)
  4. Choose Versioning > Advanced Purge Option.
  5. Select Active objects.
  6. Select either Older than x versions or Older than a particular date or number of days.

pmrep PurgeVersion (Command Line)


You can use the pmrep PurgeVersion command to purge older versions from a command prompt.

Note:

To use the PurgeVersion command, you must have the Administrator Repository privilege.

Syntax:

The PurgeVersion command uses the following syntax:
purgeversion
{-d <all | time_date | num_day> |
{-n <last_n_versions_to_keep> |
-t <time_date | num_day>}}
[-f <folder_name>]
[-q <query_name>]
[-o <output_file_name>]
[-p (preview purged objects only)]
[-b (verbose)]
[-c (check deployment group reference)]

Example:

The following example purges all versions of all deleted objects in the repository:
pmrep purgeversion -d all
The following example purges all but the latest checked-in version of objects in the folder1 folder:
pmrep purgeversion -n 1 -f folder1
The following example previews a purge of all object versions that were checked in before noon on January 5, 2005, and writes the results to the file purge_output.txt:
pmrep purgeversion -t '01/05/2005 12:00:00' -o purge_output.txt -p
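The -q option shown in the syntax above lets you drive the purge from an object query. A hedged sketch, assuming a query named q_old_versions already exists in the repository, that previews a purge keeping only the latest checked-in version of each object returned by the query:
pmrep purgeversion -q q_old_versions -n 1 -p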

Purge Object Version (Repository Manager)


To purge one object version, do the following:
  1. Connect to the repository in the Repository Manager.
  2. Select the object to be purged.
  3. Choose Versioning > View History.
    A pop-up window displays all the versions of the selected object.
  4. Select all the versions of the object that you want to purge.
  5. Select Tools > Purge Object Version.
This can also be used in conjunction with the object query feature in the Repository Manager.
Please refer to the related articles below.


Tuesday, 27 December 2016

HOW TO: Recover a deleted object from a PowerCenter versioned repository

Solution :

If the PowerCenter repository is a versioned repository, it is possible to roll back changes to previous (checked-in) versions.
When a user deletes an object (such as a mapping, workflow, or transformation), it is no longer available through the GUI, but all versions are still stored in the repository.

To recover the last version of a deleted object, first check whether the object is still checked out:
  1. In the PowerCenter Client menu, select Versioning > Find Checkouts...
  2. Locate the deleted object, verify that it is the right one, and restore it through right-click > Undo Checkout.
If the deleted object was already checked in, you can recover it by using the Repository Manager's Query tool:
  1. In the Repository Manager menu, open Tools > Queries...
  2. Create a new query with at least the filter parameter 'Version Status' 'Is Equal To' 'Deleted'.
  3. You can make the query more specific if needed, or use an existing one.
  4. Run the query.
  5. Locate the deleted object, for example by sorting on Time Stamp.
  6. Right-click the object and select View History.
  7. The resulting list shows all versions of the object. The latest one is the deleted version.
  8. Select the latest (deleted) version.
  9. In the menu, select Tools > Purge Object Version.
  10. When you confirm the following dialog with OK, the deleted version is removed from the repository
    and the previous version becomes active again.

Thursday, 8 December 2016

N is used to specify a unicode string

N is used to specify a Unicode string.

In your example, the N prefix is not required because ASCII characters (values less than 128) map directly to Unicode.
However, if you want to insert a value that is not ASCII, the N prefix is required.

For example:

Select N'Wahi', 'Wahi'
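A hedged sketch of why the prefix matters for non-ASCII text (the literal is only an illustration; the exact lossy result depends on the column collation's code page):

-- Without the N prefix the literal is varchar and goes through the current code page,
-- so characters outside that code page can be lost (often returned as '?').
SELECT 'Привет' AS without_n, N'Привет' AS with_n;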

Tuesday, 22 November 2016

Linux Commands to Find and Free a Port

To list listening ports and the processes using them: netstat -tulpn

To free a port number (kills the process holding it): sudo fuser -k Port_Number/tcp
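A hedged example of narrowing the listing to a single port before freeing it (port 6005 is only an illustration):

# show only the listener on port 6005
netstat -tulpn | grep :6005

# free it (sends a kill signal to the owning process)
sudo fuser -k 6005/tcp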

Monday, 14 November 2016

Automated Trigger Creation for Insert Operations on Tables

Select Table_Name,
       'CREATE OR REPLACE TRIGGER TRG_'||Table_Name||' AFTER INSERT ON MODELREPO.'||Table_Name||
       ' FOR EACH ROW DECLARE BEGIN INSERT INTO LOCAL.'||Table_Name||' Values('||
       Listagg(':new.'||Column_Name,',') within group (order by column_id) || '); END;'||chr(10)||'/' TRG_Creation
from   all_tab_columns
where  owner = 'LOCAL'
and    table_name in (Select table_name from tablelist where num_rows > 1)
group by Table_Name
order by 1
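For a hypothetical table WIDGETS with columns ID and NAME, the statement produced by the query above would look roughly like this:

CREATE OR REPLACE TRIGGER TRG_WIDGETS AFTER INSERT ON MODELREPO.WIDGETS
FOR EACH ROW
DECLARE
BEGIN
  INSERT INTO LOCAL.WIDGETS VALUES (:new.ID, :new.NAME);
END;
/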



Friday, 4 November 2016

Useful Commands for Informatica Data Quality:

Useful Commands for Informatica Data Quality (10.1.0):

Locate the infacmd.sh file and run the following commands from that directory:
1. List Services. 

./infacmd.sh isp ListServices -dn @DOMAINNAME -un admin -pd "@PASSWORD" -sdn Native -hp HOSTNAME:PORTNUM -re 60 -st AS 

2. Export Objects without Control File 

 ./infacmd.sh oie ExportObjects -dn @DOMAINNAME -un admin -pd "@PASSWORD" -sdn Native -pn PROJECTNAME -rs MRSNAME -fp /tmp/ashif/exp_TrialProj.xml

3. List Mapping of a Deployed Application 


./infacmd.sh MS listApplicationMappings -da infav10:6005 -sdn Native -un Admin -pd "@PASSWORD" -sn DataIntegrationServiceName -a App_Quality_Trans


4. Run a Profile

Using ExecuteProfile : 

./infacmd.sh ps executeProfile -dn infav10_Domain -hp infav10:6005 -nn infav10_Node01 -un Admin -pd "#ld@msf4" -sdn Native -msn infav10_ModelRepo_Service -dsn  infav10_Integ_Service -opn Custom_Solutions/Prof_Policy_Details

Using Execute: 

./infacmd.sh ps execute -dn infav10_Domain -hp infav10:6005 -nn infav10_Node01 -un Admin -pd "#ld@msf4" -sdn Native -msn infav10_ModelRepo_Service -dsn  infav10_Integ_Service -ot profile -opn Custom_Solutions/Prof_Policy_Details

5. Run a Scorecard

./infacmd.sh ps execute -dn infav10_Domain -hp infav10:6005 -nn infav10_Node01 -un Admin -pd "#ld@msf4" -sdn Native -msn infav10_ModelRepo_Service -dsn  infav10_Integ_Service -ot scorecard -opn Custom_Solutions/Scorecard_Policy_Details

Using Both Informatica Workflow Parameter File and Session Parameter File Together

The only way I know to do this is to use a workflow parameter file together with a session parameter file.

 

By default, when you define both a workflow parameter file and a session parameter file for a session within the workflow, the Integration Service uses the workflow parameter file and ignores the session parameter file.

 

You can force the workflow to use both the workflow parameter file and session parameter file by adding the following to the workflow section of the workflow parameter file:

 

    $PMMergeSessParamFile=TRUE

 

This must be set for the section defined for the workflow in the workflow parameter file.

 

Example

    [DEV_FOLDER.wf_workflow]

    $PMMergeSessParamFile=TRUE

 

Note

If you define the same parameter or variable in both the session and workflow parameter files, the Integration Service sets parameter and variable values according to the following rules:

 

  1. When a parameter or variable is defined in the same section of the workflow and session parameter files, the Integration Service uses the value in the workflow parameter file.
  2. When a parameter or variable is defined in both the session section of the session parameter file and the workflow section of the workflow parameter file, the Integration Service uses the value in the session parameter file.
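A hedged sketch of how the two files might fit together (folder, workflow, session, and parameter names are hypothetical):

Workflow parameter file:

    [DEV_FOLDER.wf_workflow]
    $PMMergeSessParamFile=TRUE
    $$LOAD_DATE=2016-12-01

Session parameter file:

    [DEV_FOLDER.WF:wf_workflow.ST:s_m_load_customers]
    $DBConnection_TGT=TGT_ORACLE
    $$LOAD_DATE=2016-12-02

Because $$LOAD_DATE appears both in the session section of the session parameter file and in the workflow section of the workflow parameter file, rule 2 applies and the session resolves it to 2016-12-02.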

Wednesday, 26 October 2016

Script to Generate Trace Enable Statment in Oracle

SQL: 

SELECT s.username, s.SID, s.serial#, s.PROGRAM, p.spid,
'EXECUTE dbms_system.set_sql_trace_in_session('|| s.SID||','||s.serial#||',true);'  ENAB,
'EXECUTE dbms_system.set_sql_trace_in_session('|| s.SID||','||s.serial#||',false);' DISAB
FROM   v$session s,
       v$process p
WHERE  p.addr = s.paddr
AND    s.username = 'SYS'  -- and s.Program='SQL Developer'
ORDER BY sid

Tracing the Oracle database

Tracing the Oracle database:

First, open SQL Developer and log in as a user with administrator privileges, e.g. SYSTEM.


1) Enable the Oracle database to gather timed statistics (at session or system level)

ALTER SYSTEM SET  timed_statistics = true; 
ALTER SESSION SET  timed_statistics = true;


2) Find the SID, SERIAL# for a specific session you want to monitor  

SELECT username, sid, serial#, program FROM v$session WHERE username = <User_Name>; 


3) Enable tracing for this session (must be logged in as SYSDBA)

EXECUTE dbms_system.set_sql_trace_in_session (<sid>, <serial#>, true); 

To capture bind variables with their values:

EXECUTE dbms_monitor.session_trace_enable(41, 2216, waits=>true, binds=>true) ;

Query to generate trace commands based on the current session IDs:

SELECT p.tracefile, s.username, s.SID, s.serial#, s.PROGRAM, p.spid,
'EXECUTE dbms_monitor.session_trace_enable('|| s.SID||','||s.serial#||', waits=>true, binds=>true);' ENAB,
'EXECUTE dbms_monitor.session_trace_disable('|| s.SID||','||s.serial#||');' DISAB
FROM   v$session s,
       v$process p
WHERE  p.addr = s.paddr
AND    s.username = 'SYS'
AND    s.Program not like '%SQL%'
AND    SID <> 274
ORDER BY sid


4) Identify the directory where the trace file is generated

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';
(The 'Default Trace File' row returns the full path of the current session's trace file; the directory part of that path is where the instance writes all session trace files.)
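If you only want the trace directory itself, a hedged alternative (available in 11g and later) is the 'Diag Trace' row of the same view:

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';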

5) Identify the name of the trace file which is generated 

  

SELECT s.username, s.SID, s.serial#, s.PROGRAM, p.spid
FROM   v$session s,
       v$process p
WHERE  p.addr = s.paddr and s.username = <User_Name>;

The name of the trace file is <oracle_sid>_ora_<p.spid>.trc


6) Disable tracing for this session (must be logged in as SYSDBA)

EXECUTE dbms_system.set_sql_trace_in_session (<sid>, <serial#>, false); 

Monday, 3 October 2016

Update Strategy - Session Settings in Informatica


Update Strategy - Session Settings in Informatica

Let's look at the different settings that we can configure for the Update Strategy at the session level.

Single Operation of All Rows:

We can specify a single operation for all rows using the "Treat Source Rows As" setting on the Properties tab of the session. The values you can specify for this option are:

  • Insert: The Integration Service treats all rows as inserts. If inserting a row violates a primary key or foreign key constraint in the database, the Integration Service rejects the row.
  • Delete: The Integration Service treats all rows as deletes and deletes the corresponding rows in the target table. You must define a primary key constraint in the target definition.
  • Update: The Integration Service treats all rows as updates and updates the rows in the target table that match the primary key value. You must define a primary key in the target definition.
  • Data Driven: An Update Strategy transformation must be used in the mapping. The Integration Service inserts, updates, or deletes a row in the target table based on the logic coded in the Update Strategy transformation (see the expression sketch after this list). If you use an Update Strategy transformation in the mapping but do not select Data Driven, the Workflow Manager displays a warning and the Integration Service does not follow the instructions in the Update Strategy transformation.
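A hedged sketch of the kind of expression such an Update Strategy transformation might contain (the lookup port name lkp_CUSTOMER_ID is hypothetical); DD_INSERT, DD_UPDATE, DD_DELETE and DD_REJECT are the built-in row-flagging constants:

-- flag the row as an insert when the lookup found no existing key, otherwise as an update
IIF(ISNULL(lkp_CUSTOMER_ID), DD_INSERT, DD_UPDATE)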

Update Strategy Operations for each Target Table:

You can also specify the update strategy options for each target table individually. Specify the update strategy options for each target in the Transformations view on the Mapping tab of the session:

  • Insert: Check this option to insert a row in the target table.
  • Delete: Check this option to delete a row in the target table.
  • Truncate Table: Check this option to truncate the target table before loading the data.
  • Update as Update: Update the row in the target table.
  • Update as Insert: Insert the row which is flagged as update.
  • Update else Insert: If the row exists in the target table, then update the row. Otherwise, insert the row.

The table below illustrates how data in the target table is inserted, updated, or deleted for various combinations of row flagging and individual target table settings.

Row Flagging Type | Target Table Settings | Result
Insert | Insert is specified | Source row is inserted into the target.
Insert | Insert is not specified | Source row is not inserted into the target.
Delete | Delete is specified | If the row exists in the target, it is deleted.
Delete | Delete is not specified | Even if the row exists in the target, it is not deleted from the target.
Update | Update as Update | If the row exists in the target, it is updated.
Update | Insert is specified, Update as Insert is specified | Even though the row is flagged as update, it is not updated in the target. Instead, the row is inserted into the target.
Update | Insert is not specified, Update as Insert is specified | Neither an update nor an insert happens.
Update | Insert is specified, Update else Insert is specified | If the row exists in the target, it is updated. Otherwise it is inserted.
Update | Insert is not specified, Update else Insert is specified | If the row exists in the target, it is updated. The row is not inserted if it does not exist in the target.

Friday, 30 September 2016

Oracle Flashback Query and Flashback Table

FLASHBACK TABLE

Purpose
Use the FLASHBACK TABLE statement to restore an earlier state of a table in the event of human or application error. The time in the past to which the table can be flashed back is dependent on the amount of undo data in the system. Also, Oracle Database cannot restore a table to an earlier state across any DDL operations that change the structure of the table.
Prerequisites
To flash back a table to an earlier SCN or timestamp, you must have either the FLASHBACK object privilege on the table or the FLASHBACK ANY TABLE system privilege. In addition, you must have the SELECT, INSERT, DELETE, and ALTER object privileges on the table.
Row movement must be enabled for all tables in the Flashback list unless you are flashing back the table TO BEFORE DROP. That operation is called a flashback drop operation, and it uses dropped data in the recycle bin rather than undo data. Please refer to row_movement_clause for information on enabling row movement.
To flash back a table to a restore point, you must have the SELECT ANY DICTIONARY or FLASHBACK ANY TABLE system privilege or the SELECT_CATALOG_ROLE role.
To flash back a table to before a DROP TABLE operation, you need only the privileges necessary to drop the table.
Restrictions on Flashing Back Tables: This statement is subject to the following restrictions:
  • Flashback Table operations are not valid for the following types of objects: tables that are part of a cluster, materialized views, Advanced Queuing (AQ) tables, static data dictionary tables, system tables, remote tables, object tables, nested tables, or individual table partitions or subpartitions.
  • The following DDL operations change the structure of a table, so that you cannot subsequently use the TO SCN or TO TIMESTAMP clause to flash the table back to a time preceding the operation: upgrading, moving, or truncating a table; adding a constraint to a table; adding a table to a cluster; modifying or dropping a column; adding, dropping, merging, splitting, coalescing, or truncating a partition or subpartition (with the exception of adding a range partition).

TO SCN Clause
Specify the system change number (SCN) corresponding to the point in time to which you want to return the table. The expr must evaluate to a number representing a valid SCN.
SELECT current_scn FROM v$database;
TO TIMESTAMP Clause
Specify a timestamp value corresponding to the point in time to which you want to return the table. The expr must evaluate to a valid timestamp in the past. The table will be flashed back to a time within approximately 3 seconds of the specified timestamp.
TO RESTORE POINT Clause
Specify a restore point to which you want to flash back the table. The restore point must already have been created.
FLASHBACK QUERY EXAMPLE:

SELECT COUNT(*)
FROM   TableName AS OF TIMESTAMP TO_TIMESTAMP
('2004-03-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS');



SELECT COUNT(*)
FROM   TableName  AS OF SCN 722452;
FLASHBACK TABLE EXAMPLE:

ALTER TABLE TableName ENABLE ROW MOVEMENT;

FLASHBACK TABLE TableName TO TIMESTAMP TO_TIMESTAMP('2004-03-03 10:00:00', 'YYYY-MM-DD HH:MI:SS');

FLASHBACK TABLE TableName TO SCN 715315;
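The TO RESTORE POINT clause described above has no example in this post; a hedged sketch (the restore point name before_load is hypothetical, and the restore point must have been created before the change you want to undo):

CREATE RESTORE POINT before_load;

-- ... data changes happen here ...

FLASHBACK TABLE TableName TO RESTORE POINT before_load;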



Wednesday, 28 September 2016

Overriding Default Sorting in Informatica Lookup Transformation

Informatica adds a default ORDER BY clause for all ports present in the lookup condition. If you need to override the ORDER BY clause, put a SQL comment marker (--) at the end of the override query.

e.g. Select a, b, c, d from table order by b, c, a, d --

If ports b and c are present in the lookup condition in that order, the query Informatica generates will be
Select a, b, c, d from table order by b, c, a, d -- order by b, c

Because of the comment marker, the ORDER BY clause added by Informatica is commented out.

Wednesday, 14 September 2016

Create index with substring in SQL Server


In one of my earlier posts, I wrote about multi-row based constraint implementation in Oracle:
http://mybicommunity.blogspot.in/2016/06/multi-row-based-constraints-in-oracle_8.html

When I tried to implement the same in SQL Server 2008 R2, it was not as straightforward as in Oracle, but it is achievable through a computed column/view, as sketched below.

The link below holds the approach suggested on the MSDN forums:


https://social.msdn.microsoft.com/Forums/sqlserver/en-US/392ebaa0-2546-4abf-a1ff-6a9199be4a32/help-create-index-with-substring?forum=transactsql
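A hedged sketch of the computed-column approach on SQL Server 2008 R2 (table and column names are hypothetical): the substring is exposed as a persisted computed column, and a unique index on that column acts as the constraint.

CREATE TABLE dbo.Accounts
(
    account_id     INT IDENTITY(1,1) PRIMARY KEY,
    account_code   VARCHAR(30) NOT NULL,
    -- computed column holding the first 5 characters of account_code
    account_prefix AS SUBSTRING(account_code, 1, 5) PERSISTED
);

-- a unique index on the computed column enforces uniqueness of the substring
CREATE UNIQUE INDEX UX_Accounts_Prefix ON dbo.Accounts (account_prefix);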

Sunday, 11 September 2016

Query to Get the Count of All tables in a SQL Server Database



select 'select count(*) C,'+''''+table_name+''' from '+TABLE_SCHEMA+'.'+Table_name+ ' union all'  from INFORMATION_SCHEMA.tables

Note: the generated script ends with a trailing 'union all'; remove it from the last line before running the combined query.


Monday, 5 September 2016

Oracle Next Day Function

NEXT_DAY returns the date of the first weekday named by char that is later than the given date. The return type is always DATE, regardless of the datatype of date. The argument char must be a day of the week in the date language of your session, either the full name or the abbreviation. The minimum number of letters required is the number of letters in the abbreviated version. Any characters immediately following the valid abbreviation are ignored. The return value has the same hours, minutes, and seconds component as the argument date.
Examples
This example returns the date of the next Tuesday after February 2, 2001:
SELECT NEXT_DAY('02-FEB-2001','TUESDAY') "NEXT DAY" FROM DUAL;

NEXT DAY
-----------
06-FEB-2001

Sunday, 28 August 2016

Difference between DateTime and DateTime2 DataType

Difference between DateTime and DateTime2 DataType

DateTime2 is a new data type introduced in SQL Server 2008 for storing date and time values. As per MSDN, Microsoft suggests using this new data type for new work instead of DateTime.


The following table summarizes some of the major differences between the new DateTime2 and the old DateTime data type.

Property | DateTime | DateTime2[(n)]
Min Value | 1753-01-01 00:00:00 | 0001-01-01 00:00:00
Max Value | 9999-12-31 23:59:59.997 | 9999-12-31 23:59:59.9999999
Storage Size | 8 bytes | 6 to 8 bytes
Usage | Declare @now datetime | Declare @now datetime2(7)
Compliance | Not ANSI/ISO compliant | ANSI/ISO compliant
Current date and time function | GETDATE() returns the current date and time as DateTime. Example: SELECT GETDATE() returns 2011-09-16 13:23:18.767 | SYSDATETIME() returns the current date and time as DateTime2. Example: SELECT SYSDATETIME() returns 2011-09-16 13:23:18.7676720
+/- days | Works. Example: DECLARE @nowDateTime DATETIME = GETDATE(); SELECT @nowDateTime + 1 returns 2011-09-17 13:44:31.247 | Fails; you need to use the DATEADD function. Example: DECLARE @nowDateTime2 DATETIME2 = SYSDATETIME(); SELECT @nowDateTime2 + 1 returns Msg 206, Level 16, State 2: Operand type clash: datetime2 is incompatible with int

Note: the parameter n is optional; if it is not specified, the fractional seconds precision is 7 digits and it can range from 0 to 7. For fractional seconds precision less than 3 the column takes 6 bytes, for precision 3 or 4 it takes 7 bytes, and for precision greater than 4 it takes 8 bytes.
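A hedged sketch of the DATEADD workaround mentioned in the last row of the table:

-- datetime2 does not support "+ 1"; add the day explicitly
DECLARE @nowDateTime2 DATETIME2 = SYSDATETIME();
SELECT DATEADD(DAY, 1, @nowDateTime2);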

Wednesday, 24 August 2016

Multiple Pivots in Sql Server

CREATE TABLE TblPivot
(
Time_Dt Date,
Workflow_Id int not null identity(1,1),
Completed_No int,
Average Numeric(6,2)
)


insert into TblPivot
(Time_Dt,Completed_No,Average) values
('2016-08-24',null,1.5)

insert into TblPivot
(Time_Dt,Completed_No,Average) values
('2016-08-24',null,1.6)


insert into TblPivot
(Time_Dt,Completed_No,Average) values
('2016-08-24',null,2.6)

insert into TblPivot
(Time_Dt,Completed_No,Average) values
('2016-08-24',100,2.9)






Select Time_Dt,sum(Coalesce(([10004]),0)) Completed,
sum(Coalesce(([1]),0)) Workflow_1,
sum(Coalesce(([2]),0)) Workflow_2,
sum(Coalesce(([3]),0)) Workflow_3,
sum(Coalesce(([4]),0)) Workflow_4
from 
(SELECT
*
 FROM
 (
    SELECT
Time_Dt,Workflow_Id,
case when Completed_No is not null then 10000+Workflow_Id end Count_Workflow_Id,
Completed_No,Average
 FROM TblPivot
 ) AS P
 

PIVOT
(
  Sum(Completed_No) FOR Count_Workflow_Id IN ([10004])
) AS pv1
 
-- second pivot: one Average column per Workflow_Id
PIVOT
(
 Sum(Average) FOR  Workflow_Id in ([1], [2], [3],[4])
) AS pv2 )a group by Time_Dt


--

Regards,
Ashif Ahamed 
 


"We can't solve problems by using the same kind of thinking we used when we created them - Albert Einstein"

Sunday, 21 August 2016

Base Query to Merge Overlapping and Non-Overlapping Date Ranges

CREATE TABLE dbo.Projects
( projectid  INT          NOT NULL,
  title      VARCHAR(100) NOT NULL,
  start_date DATE         NOT NULL,
  end_date   DATE         NOT NULL);
Go

INSERT INTO dbo.Projects(projectid, title, start_date, end_date) VALUES
  (1, 'Project 1', '20100212', '20100220'),
  (2, 'Project 2', '20100214', '20100312'),
  (3, 'Project 3', '20100124', '20100201'),
  (4, 'Project 4', '20100401', '20100410'),
  (5, 'Project 5', '20100420', '20100425'),
  (6, 'Project 6', '20100501', '20100509'),
  (7, 'Project 7', '20100409', '20100415'),
  (8, 'Project 8', '20100414', '20100421'),
  (9, 'Project 9', '20100605', '20100610'),
  (10, 'Project 10', '20100614', '20100621'),
  (11, 'Project 11', '20100601', '20100625');
Go

And the recursive query:

With T1 As
(Select Row_Number() Over(Order By Start_date) N,
        *
 From   Projects),
T2 As
(Select 1 N,
        start_date,
        end_date
 From   T1
 Where  N=1
 Union All
 Select T2.N+1,
        T2.start_date,
        Case When T1.end_date>=T2.end_date Then T1.end_date Else T2.end_date End
 From   T1
 Inner Join T2
        On T2.End_date >= T1.start_date
 Where  T1.N=T2.N+1
 Union All
 Select T2.N+1,
        T1.start_date,
        T1.end_date
 From   T1
 Inner Join T2
        On T2.End_date < T1.start_date
 Where  T1.N=T2.N+1
)
Select   start_date,
         Max(end_date) end_date
From     T2
Group By start_date;

The resulting output:

start_date    end_date
2010-01-24    2010-02-01
2010-02-12    2010-03-12
2010-04-01    2010-04-25
2010-05-01    2010-05-09
2010-06-01    2010-06-25

Wednesday, 8 June 2016

Multi Row Based Constraints in Oracle


Requirement from Client 

As an advocate of enforcing as much integrity as possible declaratively, I am disappointed that little has changed in that arena in 10g. CHECK constraints are still limited to simple single-table, single-row checks, and triggers have to be used to enforce any more complex rules.

Do you think Oracle will ever add multi-table check constraints or ASSERTIONS as defined by ANSI SQL? Or has declarative integrity gone about as far as it is going to in Oracle?
and we said...

I'm asking around about the future of "SQL" as far as enhancements like that go (will update when I get some feedback), but -- you are not limited to triggers (in fact, I would avoid triggers, as it is virtually IMPOSSIBLE to implement cross-row/cross-object constraints with them correctly).

I look to database objects to enforce them - let's look at a couple of cases (I know, these are not assertions, these are not domains -- they are working solutions using existing features in a way "perhaps not intended").

o case: we need a rule that enforces unique names for "current" projects. That is, we have a projects table; there are active projects and historical (completed) projects. The project name for all active projects must be unique; historical projects can have duplicates.


ops$tkyte@ORA9IR2> create table projects
  2  ( id         int primary key,
  3    status  varchar2(1) not null check (status in ('A','C')),
  4    name       varchar2(10)
  5  )
  6  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create unique index proj_name_idx on projects
  2  ( case when status = 'A' then name end )
  3  /
 
Index created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into projects values ( 1, 'C', 'hello' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into projects values ( 2, 'C', 'hello' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into projects values ( 3, 'A', 'hello' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into projects values ( 4, 'A', 'hello' );
insert into projects values ( 4, 'A', 'hello' )
*
ERROR at line 1:
ORA-00001: unique constraint (OPS$TKYTE.PROJ_NAME_IDX) violated


Function-based indexes can be used to implement some interesting and complex rules for conditional uniqueness.

Some will say this is better solved via a data model change (e.g. a projects table with subtypes "active" and "completed") -- but sometimes it is just easier to have a single physical table.

o case: averages/mins/counts, whatever, "by something". Refresh-fast-on-commit materialized views can be useful. Consider: we want between 1 and 6 employees per department:

ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
Table created.
 
ops$tkyte@ORA9IR2> alter table emp add constraint emp_pk primary key(empno);
Table altered.
 
ops$tkyte@ORA9IR2> create materialized view log on emp with rowid, (deptno) including new values;
Materialized view log created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create materialized view emp_mv
  2  refresh fast on commit
  3  as
  4  select deptno, count(*) cnt from emp group by deptno
  5  /
Materialized view created.
 
ops$tkyte@ORA9IR2> alter table emp_mv
  2  add constraint check_cnt
  3  check ( cnt between 1 and 6 )
  4  deferrable
  5  /
Table altered.
 
ops$tkyte@ORA9IR2> insert into emp (empno,deptno) values ( 1000, 20 );
1 row created.
 
ops$tkyte@ORA9IR2> commit;
Commit complete.
 
ops$tkyte@ORA9IR2> insert into emp (empno,deptno) values ( 1001, 20 );
1 row created.
 
ops$tkyte@ORA9IR2> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (OPS$TKYTE.CHECK_CNT) violated

The logic is to "materialize" the aggregation and add a check constraint to the materialization.



 

Friday, 3 June 2016

Issue with Informatica Loading into Partitioned Oracle Target Table

"ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired" when Metadata Manager session fails


Problem Description

I am facing an issue with loading into a partitioned Oracle target table.
 
We have 2 sessions with the same Oracle table as the target:
     a. INSERT data into Partition1
     b. UPDATE data in Partition2

We are trying to achieve parallelism in the workflow, and there are more partitions and sessions to be created for different data, all loading the same table but different partitions.

Currently, when we run both sessions in parallel, the UPDATE session runs successfully, but the INSERT session gets a NOWAIT error.
NOTE: both sessions load data for different partitions.

Cause
The session is running in bulk mode to the target. Bulk mode requires a full table lock, and at times this leads to an issue.
This is an Oracle-returned error. It implies that Oracle cannot handle the bulk-mode load on that target table at that point in time.
Solution
To resolve this issue, turn off bulk mode. Edit the concerned session in the Workflow Manager and make the session run in normal mode: edit the session > Mapping tab > Target object > Target load type, and change the value to Normal.
Save the session.

Tuesday, 24 May 2016

Windows Authentication for MSSQL in Talend

The usual solution is to place ntlmauth.dll in the windows/system32 (or SysWOW64) folder. Instead of placing it in the system32 folder, we can place it in our own local folder and have it loaded by our Talend jobs while establishing a connection to MS SQL Server.


Design time configuration:

1. Copy ntlmauth.dll to a bin folder inside the Talend installation location. The bin folder will not be present; create it yourself.

2. While executing the job, go to Advanced settings, select "Use Specific JVM arguments", and add the following additional argument:

-Djava.library.path=./bin

3. Execute the Talend job; it will use Windows authentication.



Runtime configuration

1. While creating the package, ensure the "bin" folder (with ntlmauth.dll) is also bundled.
2. Edit the .bat or .sh file and add the following entry:

-Djava.library.path=./bin



3. Execute the batch file; the job should run without any SSO error.