Wednesday, July 21, 2010

Calculate Grand Total in SQL Query

0 comments
To calculate a grand total within an SQL query, use the COMPUTE keyword without a BY clause:


SELECT itemid, Qty, actualPrice, subtotal
FROM SalesOrderDetail s
WHERE actualPrice < 1000
COMPUTE SUM(Qty), SUM(subtotal)
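Note that COMPUTE is specific to older versions of SQL Server (it was discontinued in SQL Server 2012). If COMPUTE is not available, a portable sketch of the same grand total, assuming the same SalesOrderDetail table, appends a summary row with UNION ALL:

SELECT itemid, Qty, actualPrice, subtotal
FROM SalesOrderDetail
WHERE actualPrice < 1000
UNION ALL
SELECT NULL, SUM(Qty), NULL, SUM(subtotal)
FROM SalesOrderDetail
WHERE actualPrice < 1000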

Wednesday, June 23, 2010

Date wise SQL Query

0 comments
A column-level SQL query that extracts data using dates as conditions:

Select e.employee_id, e.employee_name,
(Select count(*) from employee_leave el where e.employee_id = el.employee_id
and el.leave_date between e.hire_date and '2010-06-20 23:59:59') as total_leaves
from tbl_employee e
where e.hire_date between '2010-06-01 00:00:00' and '2010-06-01 23:59:59'
In the query above, each employee's leave count runs from that employee's hire date up to the cutoff date, so the total leaves are measured from the hire date onwards.
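As a sketch against the same hypothetical tables, an equivalent formulation replaces the correlated subquery with a LEFT JOIN and GROUP BY:

Select e.employee_id, e.employee_name, count(el.employee_id) as total_leaves
from tbl_employee e
left join employee_leave el on el.employee_id = e.employee_id
and el.leave_date between e.hire_date and '2010-06-20 23:59:59'
where e.hire_date between '2010-06-01 00:00:00' and '2010-06-01 23:59:59'
group by e.employee_id, e.employee_name

Here count(el.employee_id) counts only matched leave rows, so employees with no leave still appear with a total of 0.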

Tuesday, May 25, 2010

Generate Mapping against target Step

0 comments
In cases where you have a fixed target table, you will want to map the fields from the stream to their corresponding fields in the target output table. This is normally accomplished using a Select Values step in your transformation. The 'Generate mapping against target' option provides an easy-to-use dialog for defining these mappings and automatically creates the resulting Select Values step, which can be dropped into your transformation flow just before the Table Output step. The 'Generate mapping against target' option is accessed by right-clicking the Table Output step.

After defining your mappings, select OK and the Select Values step containing your mappings will appear on the workspace. Simply attach the mapping step into your transformation immediately before the Table Output step.

Pentaho Spoon Installation

1 comments
The first step is the installation of Sun Microsystems' Java Runtime Environment, version 1.4 or higher; you can download a JRE from Sun's website.
After this, you can simply unzip the file Kettle3.0.0.zip in a directory of your choice. In the Kettle directory where you unzipped the file, you will find a number of files. Under Unix-like environments (Solaris, Linux, MacOS, ...) you will need to make the shell scripts executable. Execute these commands to make all shell scripts in the Kettle directory executable:

cd Kettle
chmod +x *.sh

Starting Spoon:

To launch Spoon on the different platforms, these are the scripts that are provided:
Spoon.bat: launches Spoon on the Windows platform.
sh spoon.sh: launches Spoon on a Unix-like platform: Linux, Apple OSX, Solaris, ...
If you want to make a shortcut under the Windows platform, an icon ("spoon.ico") is provided to set the correct icon. Simply point the shortcut to the Spoon.bat file.

Supported platforms
The Spoon GUI is supported on the following platforms:
  1. Microsoft Windows: all platforms since Windows 95, including Vista
  2. Linux GTK: on i386 and x86_64 processors
  3. Apple's OSX: works both on PowerPC and Intel machines
  4. Solaris: using a Motif interface (GTK optional)
  5. AIX: using a Motif interface
  6. HP-UX


Friday, May 21, 2010

Group by SQL Query

0 comments
To extract the country-wise sales count for the period May 2010, use the code below.


Select c.country_name, count(s.id)
from tbl_sales s inner join tbl_country c on s.country_id = c.country_id
where s.sales_date between '2010-05-01 00:00:00' and '2010-05-31 23:59:59'
group by c.country_name
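To list the busiest countries first, the same query can be extended with a column alias and an ORDER BY; a sketch against the same hypothetical tables:

Select c.country_name, count(s.id) as sales_count
from tbl_sales s inner join tbl_country c on s.country_id = c.country_id
where s.sales_date between '2010-05-01 00:00:00' and '2010-05-31 23:59:59'
group by c.country_name
order by sales_count desc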

Sunday, May 16, 2010

How to Insert Extracted Values in Pentaho Spoon

0 comments

To insert the records extracted from a transactional source database into the destination or data warehouse, here is a simple way to do it.

Open Pentaho Spoon, go to the File menu and open a new Transformation.

In the design mode, drill down the Input folder and drag & drop the Table Input step. Double-click the Table Input step and set the source database connection parameters (see How to Set Connection Parameters in Pentaho Spoon); this will extract all required fields.


Again go to the design mode, and this time drag & drop the Insert / Update step lying under the Output folder. Now hold the Shift key, place your mouse over the Table Input step and drag up to the Insert / Update step. This will generate a green arrow, called a hop, with its tail at the Table Input step and its head toward the Insert / Update step.

You can enable, disable or even delete the hop by right-clicking it. Now double-click the Insert / Update step

and set the target database connection by clicking the New button, or Edit if you already have one.

For the first grid, click the Get Fields button; this will fill the grid with all the values you queried in the previous Table Input step. In this grid, keep only those key fields on whose basis the step will look up and insert records.
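Conceptually, the Insert / Update step behaves for each incoming row roughly like the hedged SQL sketch below (the table, columns and values are hypothetical):

-- look up the target row by the key fields from the first grid
UPDATE customer_dw
SET customer_name = 'Ali', customer_address = 'pakistan'
WHERE customer_id = 1234;
-- if no row matched the key, insert the row instead
INSERT INTO customer_dw (customer_id, customer_name, customer_address)
VALUES (1234, 'Ali', 'pakistan');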

Loading XML Data Using Pentaho ETL Tool

0 comments
To load the data from an XML file using Pentaho Kettle, do the following.
Go to the design mode, drill down the Input folder and drag & drop the XML Input step.
Double-click the XML Input step.
In the first tab (File), click the Browse button and add your desired XML file by clicking the Add button; this will add your file(s) to the Selected files grid under the File/Directory label.
Below this grid there is a Show file(s) option that allows you to view the complete file paths.
Now go to the Fields tab, which follows the Content tab.
In this Fields tab, click the Get Fields button lying below the grid; clicking it will extract all the contents of the XML file along with their data types and positions.
To view all these contents of your XML file you can use the XML Output step available under the Output folder.


SQL Query For Monthly Calculation

0 comments
To get an individual library member's monthly status report, use the following column-level query. It shows how many books were issued and returned during May 2010 for members whose registration date falls in that same month. Here is the query (status = 1 means issued, status = 2 means returned):

select m.member_id as "Member ID", m.member_name as "Member Name"
         ,(select count(book_id)
           from tbl_book_issue b
           where m.member_id = b.member_id
           and b.status = 1) as "Total Issued Books"
         ,(select count(book_id)
           from tbl_book_issue b
           where m.member_id = b.member_id
           and b.status = 2) as "Total Returned Books"
from tbl_member m
where m.registration_date between '2010-05-01 00:00:00' and '2010-05-31 23:59:59'


Saturday, May 15, 2010

Concatenation in Pentaho Kettle

0 comments
To get the concatenated result of two field values, do the following.
In the design mode, drill down the Scripting folder and drag & drop the Modified Java Script Value step.
Double-click the icon, go to the Java Script text area, and paste the code below into it:
var fullName = FirstName + LastName;
Here "var" is the JavaScript keyword for declaring a variable, while "fullName" is the name of the variable that will hold the concatenated result. You can view the "fullName" variable containing the result in the grid by clicking the Get variables button at the bottom of the step.

How to Split a String in Pentaho Kettle

0 comments
To get the values from a comma-separated string, we can split it in the following way.
In the design mode, drill down the Scripting folder and drag & drop the Modified Java Script Value step.
Double-click the icon, go to the Java Script text area, and paste the code below into it:
var employee = EmployeeInfo.getString().split("\\,");
var ID = employee[0];
var Name = employee[1];
var Address = employee[2];
Here "var" is the JavaScript keyword for declaring a variable, while "ID", "Name" and "Address" are the variables that will hold the values after splitting. EmployeeInfo is the string that contains the ID, name and address in a form such as: 1234,Ali,pakistan

Thursday, May 13, 2010

Date to String Conversion in Pentaho Kettle

0 comments
Here is a simple conversion from Date to String in Spoon.
Go to the design mode, select the Modified Java Script Value step lying under the Scripting folder, and drag & drop it.
Double-click the step and write the following code in the Script Values area:
var dateToStringConversion = date2str(datefield_name, "yyyyMMdd");
Here dateToStringConversion is the name of the new variable, and datefield_name is the incoming field (from the previous step) that you want to convert.

How to get Distinct Count in Pentaho

0 comments
In order to get a distinct count in Kettle Spoon:
Go to the design mode, drill down the Transformation folder and drag & drop the Unique Rows step.
This step removes duplicate rows and gives only unique occurrences, but it works only on sorted input. So you also need a Sort Rows step, available under the Transformation folder; just drag & drop it and make the hop between the two steps.
If the input is not sorted, only consecutive duplicate rows are handled correctly.
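In SQL terms, the Sort Rows plus Unique Rows combination corresponds roughly to a SELECT DISTINCT; a minimal sketch, assuming a hypothetical customer table:

SELECT DISTINCT country
FROM customer
ORDER BY country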

How to Write ETL Script in Pentaho

0 comments
In this post I am going to show beginners how to write a simple script in Pentaho Kettle Spoon, for example extracting data from a source (transactional) database and loading it into a destination (target) database. Here are the example steps.
Extract customer information from the Customer_database in MySQL.
Transform and load the extracted data into the SQL Server Customer_database table.
Open Pentaho Spoon.
Go to the File menu and select Transformation, or press Ctrl+N.
Go to the design mode and select the Table Input step from the Input folder and the Table Output step from the Output folder, respectively.
Hold the Shift key and drag from Table Input to Table Output; this creates a green arrow, called a hop, showing that all the rows flow to the next step.
Extract the required fields from the source table (represented in this example by the Table Input step reading from MySQL) and load all the fields into the destination database table (represented by the Table Output step writing to SQL Server).
Double-click each step to set the source and destination connections. Name the connection (if you are working on the local system, use localhost; otherwise use the IP address), select MySQL from the connection type list, set the user id and password followed by the database name, and test the connection.
After defining the connection, write the query in the SQL text area. Then double-click the Table Output step, set the connection the same way as defined above, and also define the target table that will hold the extracted data.
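For example, the query in the Table Input's SQL text area might look like the following sketch (the table and column names are hypothetical):

SELECT customer_id, customer_name, customer_address
FROM customer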

Wednesday, May 12, 2010

Check Table Column Existence in Pentaho

0 comments
In order to find out whether a specific column exists in a database table, use the Check if a column exists step, available under the Lookup folder in the design mode (an SQL equivalent is sketched after the list).
  1. Drag & drop the step.
  2. Double-click the step, name it, and define the connection (MySQL, SQL Server, etc.).
  3. Table name: the name of the table to which the column belongs.
  4. Table name in field: enable this if you want to read the name of the table from an input field.
  5. Tablename field: the input field that supplies the table name.
  6. Columnname field: the input field that supplies the name of the column to check.
  7. Result fieldname: the resulting Boolean flag that shows whether the column exists or not.
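By hand, the same check could be written against the database catalog; a sketch for MySQL or SQL Server using INFORMATION_SCHEMA (table and column names are hypothetical):

SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'customer'
AND COLUMN_NAME = 'country_id'

A count of 1 means the column exists, mirroring the step's Boolean result field.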



Performing Calculations in Pentaho Spoon

0 comments
To perform simple calculations on fields extracted from a database table (or on constant values) and produce new output fields:
  1. Go to a Transformation in the design mode.
  2. Drill down the Transformation folder and drag & drop the Calculator step.
  3. Double-click the Calculator step; a popup window will appear.
  4. In the New field column, name your new field.
  5. Select the type of calculation you want to perform from the list box.
  6. The Field A, Field B and Field C combos let you select the fields on which the calculation is performed.
  7. Select the value type from the combo box and set the decimal precision.

Aggregate Functions in Pentaho Spoon

0 comments
To calculate aggregate functions such as Sum(), Average(), Count(), Min(), Max(), First() and Last(), usually on data read from text files, do the following (see the SQL sketch after these steps):

  1. Drill down the Input folder in the design mode and select the Text File Input step.

  2. Double-click the Text File Input step; in the File tab, browse for your file and add it to the selected files.

  3. In the Content tab, select the file type.

  4. Go to the Fields tab and click Get Fields; this will extract all the fields along with their data types, reading each comma-separated field.

  5. Go to the design mode again, find the Deprecated folder (the third-last folder), drill down into it and drag & drop the Aggregate Rows step.

  6. Double-click Aggregate Rows; a popup window will open. Click Get Fields to pull in all the fields as read from the Text File Input step, set a name for each field in the New name column, and finally select the aggregate function from the combo box.
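If the same data lived in a database table instead of a text file, the equivalent result could be produced in SQL; a hedged sketch with hypothetical names:

SELECT SUM(qty) AS total_qty,
AVG(price) AS avg_price,
COUNT(*) AS row_count,
MIN(price) AS min_price,
MAX(price) AS max_price
FROM sales_import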

What is Checksum in Pentaho Spoon

0 comments
By definition, a checksum is a way to determine the integrity of data: both the sender and receiver can test whether the data is correct. The sender calculates the checksum value by summing over the transmitted binary data; at the other end, the receiver performs the same calculation on the data and compares it with the checksum value provided by the sender. If the two values match, the receiver has some assurance that the data was received correctly.
Pentaho Spoon provides the Add a Checksum step, lying under the Transformation folder.


  1. Go to the design mode.
  2. Drill down the Transformation folder.
  3. Drag & drop the Add a Checksum step.
  4. Name the step.
  5. Define the checksum algorithm; a commonly used algorithm for checksums is MD5.
  6. Click the Get Fields button. This will extract and show all the fields coming from the previous step to which you are going to apply the checksum; they might be extracted by an Input step from a database table or stored procedure.

Tuesday, May 11, 2010

Hex to integer Conversion in Kettle

0 comments
To convert a hex value to an integer in Spoon, do the following.
Go to the design mode.
Drill down the Scripting folder.
Drag and drop the Modified Java Script Value step.
Double-click the Modified Java Script Value step, drill down the Transformation Scripts tree at the left-hand side of the script editor, and double-click the script.
In the script area paste the following line:
var nummerke = Packages.java.lang.Integer.valueOf(Hex.getString(), 16).intValue();
Here Hex is your input value coming from another step. At the bottom of the script editor, click the Get fields button; you now get the converted variable with its data type showing in the grid. Finally, test the script.

How to Set Environment Variables in Kettle

0 comments
To set environment variables, go to the Edit menu and click Set Environment Variables, or press CTRL+ALT+J. In the Name column, name your variable and assign it a value; by default the variable is of String type. Later in your job or transformation you can access these variables using the Get Variable step. However, it is recommended not to use environment variables if you are working with more than one transformation or job, because these variables have global scope and changes to them would conflict across transformations or jobs. For this reason Kettle variables were introduced: they provide a way to define variables that are local to the job in which the variable is set. The Set Variable step in a transformation allows you to specify the scope in which you want to set the variable (i.e. parent job, grandparent job or the root job).
In the same Edit menu you can see all your environment variables using the Show Environment Variables option; here you can also sort the variables by clicking the Name column header.

Monday, May 10, 2010

What is Get Variable in Pentaho Kettle

0 comments
The Get Variable step is used to access the values of variables you have declared earlier. Whenever you need the value of a variable that was declared with the Set Variable step, drag and drop a Get Variable step and do the following.
  1. Go to the Job folder and drag & drop a Get Variable step.
  2. Name the value of the accessed variable so that you can use it in other steps.
  3. In the second column, give the same name of the required variable as you declared it in the Set Variable step, in the format below:
  4. ${variable} or %%variable%%
Example:
variable value: ${my_variable} (as declared in the Set Variable step)

What is Set Variable in Pentaho Kettle

0 comments
The Set Variable step is used to set or declare variables to hold values, usually database fields. For example, you may want to set a start_date value which you have extracted from your time_dimension table using an Input step, so that this start_date field is accessible throughout your script.
In the upper text field, name the step, then click the Get Fields button to load all the desired table fields (extracted with the Input step) into the Field name column. In the second column, name the variable for each field name, and in the next column define its scope. The last column sets a default value, which is very useful for avoiding null values that might come from your Input queries: if a null value arrives, the variable is set to the default value instead.

Sunday, May 9, 2010

Storing Transformations and Jobs in local Files System

0 comments

Spoon provides you with the ability to store transformation and job files in the local file system or in the Kettle repository. The Kettle repository can be housed in any common relational database. This means that in order to load a transformation from a database repository, you need to connect to it, and for that you need to define a database connection to the repository. You can do this using the repositories dialog that is presented as a welcome window when you start Spoon. If you just want to store your transformation and job files in the local file system, simply click No repository.

What is Pentaho

0 comments

The Pentaho BI Project is open source application software providing enterprise reporting, analysis, dashboard, data mining, workflow and ETL capabilities for business intelligence needs.

What is Spoon
Kettle is an acronym for "Kettle E.T.T.L. Environment". This means it has been designed to help you with your ETTL needs: the Extraction, Transformation, Transportation and Loading of data. Spoon is a graphical user interface that allows you to design transformations and jobs that can be run with the Kettle tools Pan and Kitchen. Pan is a data transformation engine that is capable of performing a multitude of functions such as reading, manipulating and writing data to and from various data sources. Kitchen is a program that can execute jobs designed by Spoon in XML or in a database repository.

Usually, transformations and jobs describe themselves using an XML file, or can be put in a Kettle database repository. This information can then be read by Pan or Kitchen to execute the described steps in the transformation or run the job.

ETL Scripting with Open Source ETL Tools

2 comments

Extract Transform Load (ETL)
Extracting, transforming, and loading (ETL) is a process used in databases, especially in data warehousing, and has the following steps:
Extracting data from sources (the transactional database)
Transforming it to fit operational needs (which can include quality levels)
Loading it into the destination (a database, data warehouse, or staging area where you work with the data)

Extraction:
The first part of an ETL process involves extracting the data from the source systems, usually called a transactional database, where the actual transactions are performed. Most data warehousing projects consolidate data from different source systems, such as relational databases.
Extraction converts the data into a format suitable for transformation processing. An intrinsic part of the extraction involves parsing the extracted data and checking whether it meets an expected pattern or structure; if not, the data may be rejected entirely or in part. This step involves extracting all the data specified by the written SQL queries and procedures from the source system.

Transformation:
The transform stage applies a series of rules or functions to the data extracted from the source to derive the data to be loaded into the end target. Some data sources will require very little or even no manipulation of data.

Loading:
This step involves loading the data that has been extracted from one source database (say MySQL) into the target (say SQL Server). The load phase loads the data into the end target, usually the data warehouse (DW). Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative data, with updates of the extracted data done daily, weekly or monthly, while other DWs (or even other parts of the same DW) may add new data in a historicized form, for example hourly.
  
Example:
It might be your requirement to accumulate data residing in different databases: for example, customer sales information in MySQL and its support and maintenance information in SQL Server. Here you need to gather the information in order to present it, and an ETL script is the best solution.
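As a sketch of what such a script boils down to (all table and column names are hypothetical), the extraction side runs a query against MySQL and the loading side inserts into SQL Server:

-- extraction query against the MySQL source
SELECT customer_id, sales_total
FROM customer_sales;
-- load into the SQL Server target
INSERT INTO customer_history (customer_id, sales_total)
VALUES (1001, 2500.00);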

Best Practices:
A four-layered approach for ETL architecture design:
  • Functional layer: Core functional ETL processing (extract, transform, and load).
  • Operational management layer: Job-stream definition and management, parameters, scheduling, monitoring, communication and alerting.
  • Audit, balance and control (ABC) layer: Job-execution statistics, balancing and controls, rejects- and error-handling, codes management.
  • Utility layer: Common components supporting all other layers.
Open Source ETL Tools:
  • Apatar
  • CloverETL
  • Flat File Checker
  • Jitterbit 2.0
  • Pentaho Data Integration
  • RapidMiner
  • Scriptella
  • Talend Open Studio

Saturday, May 8, 2010

SELECT AS CLAUSE

0 comments

In an SQL query you can define a column heading by writing the heading right after the field name, but such a bare alias cannot contain a space. If you want a multi-word heading, use the AS clause, for example with an underscore joining the words:
Column heading without the AS clause:
SELECT LASTNAME, SALARY TotalSalary
FROM EMP
WHERE WORKDEPT = 'C01'
ORDER BY 2 DESC
Column heading with the AS clause:
SELECT LASTNAME, SALARY + BONUS + COMM AS TOTAL_EARNINGS
FROM EMP
WHERE WORKDEPT = 'C01'
ORDER BY TOTAL_EARNINGS DESC
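If you want an actual space in the heading, most databases also accept a quoted alias (identifier quoting varies by database); a sketch:

SELECT LASTNAME, SALARY + BONUS + COMM AS "TOTAL EARNINGS"
FROM EMP
WHERE WORKDEPT = 'C01'
ORDER BY "TOTAL EARNINGS" DESC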

Calculations in SQL Query

0 comments
What are the total earnings for sales reps?

SELECT LASTNAME, SALARY + BONUS + COMM
FROM EMP
WHERE JOB = 'FIELDREP'
Which operators have total earnings less than $20,000?
SELECT LASTNAME, SALARY + BONUS + COMM
FROM EMP
WHERE JOB = 'OPERATOR'
AND SALARY + BONUS + COMM < 20000

SQL Distinct Unique Records

0 comments

The query below will select all rows, including repetitive or redundant (i.e. duplicated) values:
SELECT WORKDEPT
FROM EMP
Distinct rows:
This will extract all the rows but remove the duplicates:
1. SELECT DISTINCT WORKDEPT
FROM EMP
2. SELECT DISTINCT WORKDEPT, JOB
FROM EMP
Distinct Count
One person can send many feedback comments against this post; to count the distinct comments posted by each person:
SELECT POSTED_BY, COUNT(DISTINCT COMMENTS)
FROM FEEDBACK
GROUP BY POSTED_BY

SQL IN, NOT IN and NOT BETWEEN Clauses

0 comments
IN Clause
The IN clause extracts only those rows whose values match, or exist in, the list of values given in the IN clause. It does not display rows that fail the condition, not even rows with NULL values; that is why we sometimes need to make a temporary table to store these values and then extract them using left joins, so that all the values remain visible.

1. SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE WORKDEPT IN ('D11', 'D21', 'E11')
2. SELECT *
FROM EMPLOYEE
WHERE COUNTRY IN ('PAKISTAN', 'UK', 'US', 'AUSTRALIA')
NOT BETWEEN
To select employees who were not hired in the month of May 2010:

SELECT *
FROM EMPLOYEE
WHERE HIRE_DATE NOT BETWEEN '2010-05-01 00:00:00' AND '2010-05-31 23:59:59'

NOT IN
NOT IN is the reverse of the IN clause. If there are only a few values it is fine, but otherwise avoid using NOT IN because it affects performance; instead use the EXISTS or NOT EXISTS clause, which is much faster than IN or NOT IN (see the sketch below).

SELECT *
FROM EMPLOYEE
WHERE COUNTRY NOT IN ('PAKISTAN', 'UK', 'US', 'AUSTRALIA')
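Since NOT EXISTS is recommended above, here is a minimal sketch of the same country filter rewritten with it, assuming a hypothetical excluded_country table that holds the unwanted values:

SELECT *
FROM EMPLOYEE e
WHERE NOT EXISTS
(SELECT 1 FROM excluded_country x
WHERE x.country_name = e.COUNTRY)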

SQL Between Clause

0 comments
The BETWEEN ... AND expression specifies a range and tells the database to select the desired records within that range, for example to find all the transactions done in the month of May 2010.
Example

Select c.customer_name, s.invoice_no, s.invoice_date
from sales s
inner join customer c on s.customer_id = c.customer_id
where s.sales_date between '2010-05-01 00:00:00' and '2010-05-31 23:59:59'

SQL Multiple Conditional Queries

0 comments
Which women designers have an annual salary over $22,000?

Examples:
1.SELECT LASTNAME, SALARY
FROM EMP
WHERE SALARY > 22000
AND JOB = 'DESIGNER'
AND SEX = 'F'

Who makes more than $40,000 or is a manager?
2.SELECT LASTNAME, FIRSTNAME, SALARY, JOB
FROM EMP
WHERE SALARY > 40000 OR
JOB = 'MANAGER'

3. Which field reps make more than $2,000 in commissions, or who has an education level higher than 16?
SELECT LASTNAME, JOB, COMM
FROM EMP
WHERE (COMM > 2000 AND JOB = 'FIELDREP')
OR EDLEVEL > 16


4.SELECT LASTNAME, JOB, COMM
FROM EMP
WHERE COMM > 2000 AND (JOB = 'FIELDREP'
OR EDLEVEL > 16)

SQL NULL Conditional Queries

0 comments
A special value that represents absence of information.

Examples: 
1. SELECT DEPTNO, DEPTNAME, MGRNO
FROM DEPT
WHERE MGRNO IS NULL
SQL Server
Select the top ten employees who do not have any manager in SQL Server:
2. SELECT TOP 10 *
FROM EMPLOYEE
WHERE MANAGER IS NULL

MySQL
Select the top ten employees who do not have any manager in MySQL:
3. SELECT *
FROM EMPLOYEE
WHERE MANAGER IS NULL
LIMIT 10



SQL Comparison Operators

0 comments
= Equal
<> Not equal
> Greater than
>= Greater than or equal
< Less than
<= Less than or equal
Examples:
SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE SALARY >= 20000

SELECT LASTNAME, FIRSTNAME, JOB
FROM EMP
WHERE JOB <> 'DESIGNER'

Saturday, May 1, 2010

Candidate Key

0 comments
In the relational model of databases, a candidate key of a relation is a minimal superkey for that relation; that is, a set of attributes such that:

1. the relation does not have two distinct tuples with the same values for these attributes (which means that the set of attributes is a superkey), and

2. there is no proper subset of these attributes for which (1) holds (which means that the set is minimal).

Since a relation contains no duplicate tuples, the set of all its attributes is a superkey if NULL values are not used. It follows that every relation will have at least one candidate key.



The candidate keys of a relation tell us all the possible ways we can identify its tuples. As such they are an important concept for the design of the database schema.


Super Key

1 comments
A superkey is defined in the relational model of database organization as a set of attributes of a relation variable (relvar) for which it holds that in all relations assigned to that variable there are no two distinct tuples (rows) that have the same values for the attributes in this set. Equivalently a superkey can also be defined as a set of attributes of a relvar upon which all attributes of the relvar are functionally dependent.

A superkey is a set of columns within a table whose values can be used to uniquely identify a row. A candidate key is a minimal set of columns necessary to identify a row, this is also called a minimal superkey. For example, given an employee table, consisting of the columns employeeID, name, job, and departmentID, we could use the employeeID in combination with any or all other columns of this table to uniquely identify a row in the table. Examples of superkeys in this table would be {employeeID, Name}, {employeeID, Name, job}, and {employeeID, Name, job, departmentID}.




Composite & Compound keys

0 comments
In database design, a compound key is a key that consists of 2 or more attributes that uniquely identify an entity occurrence. Each attribute that makes up the compound key is a simple key in its own right.

This is often confused with a composite key whereby even though this is also a key that consists of 2 or more attributes that uniquely identify an entity occurrence, at least one attribute that makes up the composite key is not a simple key in its own right.

An example might be an entity that represents the modules each student is attending at University. The entity has a studentId and a moduleCode as its primary key. Each of the attributes that make up the primary key are simple keys because each represents a unique reference when identifying a student in one instance and a module in the other.
In contrast, using the same example, imagine we identified a student by their firstName + lastName. In our table representing students on modules our primary key would now be firstName + lastName + moduleCode. Because firstName + lastName represent a unique reference to a student, it is not a simple key, it is a combination of attributes used to uniquely identify a student. Therefore the primary key for this entity is a composite key.
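A minimal DDL sketch of both cases (all names are hypothetical): a compound key whose parts are simple keys, and a composite key built on firstName + lastName:

-- compound key: both parts are simple keys in their own right
CREATE TABLE student_module (
studentId INTEGER,
moduleCode VARCHAR(10),
PRIMARY KEY (studentId, moduleCode)
);
-- composite key: firstName + lastName is not itself a simple key
CREATE TABLE student_module_by_name (
firstName VARCHAR(30),
lastName VARCHAR(30),
moduleCode VARCHAR(10),
PRIMARY KEY (firstName, lastName, moduleCode)
);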


Foreign key

0 comments
In the context of relational databases, a foreign key is a referential constraint between two tables.[1] The foreign key identifies a column or a set of columns in one (referencing) table that refers to a set of columns in another (referenced) table. The columns in the referencing table must be the primary key or another candidate key in the referenced table. The values in one row of the referencing columns must occur in a single row in the referenced table. Thus, a row in the referencing table cannot contain values that don't exist in the referenced table (except potentially NULL). This way references can be made to link information together, and it is an essential part of database normalization. Example:


CREATE TABLE table_name (
id INTEGER PRIMARY KEY,
col2 CHARACTER VARYING(20),
col3 INTEGER,
CONSTRAINT col3_fk FOREIGN KEY(col3)
REFERENCES other_table(key_col) ON DELETE CASCADE
)

Example:
In the example below, country_id is the primary key in the referenced Country table, while here in this query it acts as a foreign key in the Customer table.
Select ct.cus_name, ct.cus_address, c.country_name
from customer ct
inner join country c on ct.country_id = c.country_id
 
 

Alternate or Secondary Key

0 comments
With respect to relational databases, an alternate key (or secondary key) is any candidate key which is not selected to be the primary key (PK).

For example, a relational database with a table "employee" could have attributes like "employee_id", "national_insurance_number", and so on. In this case, both "employee_id" and "national_insurance_number" serve as unique identifiers for a given employee, and could thus arguably be used for a primary key. Hence, both of them are called "candidate keys". If, for example, "national_insurance_number" was chosen as the primary key, "employee_id" would become the alternate key.
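In DDL terms, the chosen candidate becomes the primary key and the other candidate can be declared UNIQUE; a sketch with hypothetical column sizes:

CREATE TABLE employee (
national_insurance_number VARCHAR(15) PRIMARY KEY,
employee_id INTEGER NOT NULL UNIQUE, -- the alternate key
name VARCHAR(50)
);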

Surrogate Key

0 comments
A surrogate key in a database is a unique identifier for either an entity in the modeled world or an object in the database. The surrogate key is not derived from application data.
Two definitions of a surrogate appear in the literature:
Surrogate (1)

This definition is based on that given by Hall, Owlett and Todd (1976). Here a surrogate represents an entity in the outside world. The surrogate is internally generated by the system but is nevertheless visible to the user or application.

Surrogate (2)

This definition is based on that given by Wieringa and De Jonge (1991). Here a surrogate represents an object in the database itself. The surrogate is internally generated by the system and is invisible to the user or application.


Unique key

0 comments
In relational database design, a unique key can uniquely identify each row in a table, and is closely related to the Superkey concept. A unique key comprises a single column or a set of columns. No two distinct rows in a table can have the same value (or combination of values) in those columns if NULL values are not used. Depending on its design, a table may have arbitrarily many unique keys but at most one primary key.

Unique keys do not enforce the NOT NULL constraint in practice. Because NULL is not an actual value (it represents the lack of a value), when two rows are compared, and both rows have NULL in a column, the column values are not considered to be equal. Thus, in order for a unique key to uniquely identify each row in a table, NULL values must not be used.
Example:

CREATE TABLE table_name (
id_col INT,
col2 CHARACTER VARYING(20),
key_col SMALLINT,
CONSTRAINT key_unique UNIQUE(key_col)
)

Primary Key

0 comments
The primary key within a relational database is the attribute or combination of attributes that is defined to uniquely identify each row or record in a relation.
Example
CREATE TABLE Employee
(ID integer PRIMARY KEY,
Emp_Name varchar(30),
Emp_Address varchar(30));

Database Keys

0 comments

A key within a relational database is used for specifying uniqueness. According to Codd, Date, and all other experts, a key has only one meaning in relational theory: it is a set of one or more columns whose combined values are unique among all occurrences in a given table. Some of the most used keys within a relational database are defined below.
Keys are defined in tables to access or sequence the stored data quickly and smoothly. They are also used to create links between different tables.