SQL INSERT INTO for Inserting Data

Every database starts empty. Before you can query, filter, or join anything, data needs to get into your tables. The SQL INSERT INTO statement is the fundamental command for adding new rows to a table, and it is one of the first things every developer must understand when working with relational databases.

While the basic syntax is straightforward, there are several variations and nuances that matter in practice. You can insert a single row at a time, batch multiple rows in one statement for efficiency, or even populate a table directly from another query's results. This guide covers each approach with clear examples, explains how auto-increment columns fit into the picture, highlights common mistakes, and gives you the confidence to insert data correctly in any situation.

The Sample Schema

All examples in this guide use the following tables:

CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    first_name VARCHAR(50) NOT NULL,
    last_name VARCHAR(50) NOT NULL,
    email VARCHAR(100),
    city VARCHAR(50) DEFAULT 'Unknown',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE products (
    product_id INT PRIMARY KEY,
    product_name VARCHAR(100) NOT NULL,
    category VARCHAR(50),
    price DECIMAL(10,2) NOT NULL,
    in_stock BOOLEAN DEFAULT TRUE
);

CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    customer_id INT NOT NULL,
    order_date DATE DEFAULT CURRENT_DATE,
    total_amount DECIMAL(10,2)
);

CREATE TABLE archived_customers (
    customer_id INT PRIMARY KEY,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    email VARCHAR(100),
    city VARCHAR(50)
);
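The SQL throughout this guide is standard enough to run on most engines. If you want to follow along without installing a database server, one lightweight option (an illustrative setup, not part of the guide itself) is SQLite through Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database; pass a filename instead to persist data between runs
conn = sqlite3.connect(":memory:")

# SQLite accepts the column types used in this guide's schema
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    first_name VARCHAR(50) NOT NULL,
    last_name VARCHAR(50) NOT NULL,
    email VARCHAR(100),
    city VARCHAR(50) DEFAULT 'Unknown',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
""")

conn.execute(
    "INSERT INTO customers (customer_id, first_name, last_name) "
    "VALUES (1, 'Alice', 'Martin')"
)
row = conn.execute("SELECT city FROM customers WHERE customer_id = 1").fetchone()
print(row[0])  # 'Unknown' -- the DEFAULT was applied for the omitted column
```

Keep in mind that SQLite is forgiving about types (it stores VARCHAR values as TEXT), so a few results may differ slightly from stricter engines like PostgreSQL.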

Single Row Insert

The most basic form of INSERT INTO adds one row to a table by specifying the target table, the columns you want to populate, and the corresponding values.

Full Syntax

INSERT INTO table_name (column1, column2, column3)
VALUES (value1, value2, value3);

Example: Inserting a Complete Row

INSERT INTO customers (customer_id, first_name, last_name, email, city)
VALUES (1, 'Alice', 'Martin', 'alice@example.com', 'New York');

To verify the insert:

SELECT * FROM customers WHERE customer_id = 1;
| customer_id | first_name | last_name | email | city | created_at |
|---|---|---|---|---|---|
| 1 | Alice | Martin | alice@example.com | New York | 2026-02-25 09:28:46 |

The created_at column was not specified in the INSERT, so it received its default value (CURRENT_TIMESTAMP).

Omitting the Column List

If you provide values for every column in the table, in the exact order they were defined, you can omit the column list:

INSERT INTO products
VALUES (101, 'Wireless Mouse', 'Electronics', 25.99, TRUE);

This works, but it is fragile and discouraged in production code.

warning

Omitting the column list means your INSERT depends on the table's column order. If someone later adds a column to the table or reorders columns during a migration, your statement will either fail or silently insert data into the wrong columns. Always explicitly list the columns.

Bad:

INSERT INTO products VALUES (101, 'Wireless Mouse', 'Electronics', 25.99, TRUE);

Good:

INSERT INTO products (product_id, product_name, category, price, in_stock)
VALUES (101, 'Wireless Mouse', 'Electronics', 25.99, TRUE);

Inserting with Default Values

When you omit a column from the INSERT statement, the database uses that column's default value (or NULL if no default is defined).

INSERT INTO customers (customer_id, first_name, last_name, email)
VALUES (2, 'Bob', 'Jones', 'bob@example.com');

Let's check:

SELECT * FROM customers WHERE customer_id = 2;
| customer_id | first_name | last_name | email | city | created_at |
|---|---|---|---|---|---|
| 2 | Bob | Jones | bob@example.com | Unknown | 2026-02-25 09:29:36 |
note

city received its default value 'Unknown', and created_at received CURRENT_TIMESTAMP.

You can also use the DEFAULT keyword explicitly:

INSERT INTO customers (customer_id, first_name, last_name, email, city, created_at)
VALUES (3, 'Carol', 'Smith', 'carol@example.com', DEFAULT, DEFAULT);

This inserts 'Unknown' for city and the current timestamp for created_at, just as if those columns had been omitted.

Inserting NULL Values

If a column allows NULL and you want to explicitly store a null value, pass NULL as the value:

INSERT INTO customers (customer_id, first_name, last_name, email, city)
VALUES (4, 'Dave', 'Wilson', NULL, 'Denver');

Let's check:

SELECT * FROM customers WHERE customer_id = 4;
| customer_id | first_name | last_name | email | city | created_at |
|---|---|---|---|---|---|
| 4 | Dave | Wilson | NULL | Denver | 2026-02-25 09:30:10 |
danger

If a column is defined as NOT NULL, attempting to insert NULL will raise an error:

INSERT INTO customers (customer_id, first_name, last_name)
VALUES (5, NULL, 'Turner');
ERROR: null value in column "first_name" violates not-null constraint

Always check your column constraints before inserting. Columns with NOT NULL must receive an explicit value (or have a default defined).

Data Type Rules for Values

Values in the VALUES clause must match the column's data type:

| Column Type | Valid Value Examples | Invalid |
|---|---|---|
| INT | 42, 0, -7 | 'hello', 12.5 |
| VARCHAR(50) | 'Alice', 'New York' | 42 (some DBs auto-cast) |
| DECIMAL(10,2) | 25.99, 100, 0.50 | 'cheap' |
| DATE | '2024-06-15' | 'not-a-date' |
| BOOLEAN | TRUE, FALSE, 1, 0 | 'maybe' |
| TIMESTAMP | '2024-06-15 10:30:00', CURRENT_TIMESTAMP | 42 |

String and date values must be enclosed in single quotes. Numeric values are written without quotes.

Wrong:

-- Number in quotes, string without quotes
INSERT INTO products (product_id, product_name, price)
VALUES ('101', Wireless Mouse, '25.99');

Correct:

INSERT INTO products (product_id, product_name, price)
VALUES (101, 'Wireless Mouse', 25.99);
tip

While some databases perform implicit type casting (e.g., '101' to 101), relying on this makes your code less portable and harder to understand. Use the correct literal format for each data type.

Multi-Row Insert

Inserting rows one at a time works fine for small amounts of data, but it is slow when you need to load dozens, hundreds, or thousands of rows. Multi-row insert lets you add multiple rows in a single statement.

Syntax

INSERT INTO table_name (column1, column2, column3)
VALUES
(value1a, value2a, value3a),
(value1b, value2b, value3b),
(value1c, value2c, value3c);

Each set of parentheses represents one row. They are separated by commas.

Example: Inserting Multiple Products

INSERT INTO products (product_id, product_name, category, price, in_stock)
VALUES
(101, 'Wireless Mouse', 'Electronics', 25.99, TRUE),
(102, 'USB-C Cable', 'Electronics', 9.99, TRUE),
(103, 'Notebook A5', 'Stationery', 4.50, TRUE),
(104, 'Mechanical Keyboard', 'Electronics', 89.99, FALSE),
(105, 'Desk Lamp', 'Furniture', 34.50, TRUE),
(106, 'Pen Pack', 'Stationery', 2.99, TRUE);

To verify:

SELECT product_id, product_name, price FROM products ORDER BY product_id;
| product_id | product_name | price |
|---|---|---|
| 101 | Wireless Mouse | 25.99 |
| 102 | USB-C Cable | 9.99 |
| 103 | Notebook A5 | 4.50 |
| 104 | Mechanical Keyboard | 89.99 |
| 105 | Desk Lamp | 34.50 |
| 106 | Pen Pack | 2.99 |

All six rows were inserted with a single statement.

Why Multi-Row Insert Is Better

Consider the difference between six individual inserts and one multi-row insert:

Slow (six round trips to the database):

INSERT INTO products (product_id, product_name, category, price) VALUES (101, 'Wireless Mouse', 'Electronics', 25.99);
INSERT INTO products (product_id, product_name, category, price) VALUES (102, 'USB-C Cable', 'Electronics', 9.99);
INSERT INTO products (product_id, product_name, category, price) VALUES (103, 'Notebook A5', 'Stationery', 4.50);
INSERT INTO products (product_id, product_name, category, price) VALUES (104, 'Mechanical Keyboard', 'Electronics', 89.99);
INSERT INTO products (product_id, product_name, category, price) VALUES (105, 'Desk Lamp', 'Furniture', 34.50);
INSERT INTO products (product_id, product_name, category, price) VALUES (106, 'Pen Pack', 'Stationery', 2.99);

Fast (one round trip):

INSERT INTO products (product_id, product_name, category, price)
VALUES
(101, 'Wireless Mouse', 'Electronics', 25.99),
(102, 'USB-C Cable', 'Electronics', 9.99),
(103, 'Notebook A5', 'Stationery', 4.50),
(104, 'Mechanical Keyboard', 'Electronics', 89.99),
(105, 'Desk Lamp', 'Furniture', 34.50),
(106, 'Pen Pack', 'Stationery', 2.99);

The multi-row version sends a single command to the database, which reduces network overhead, locking overhead, and transaction log writes. On large inserts, the performance difference can be 10x or more.

tip

Most databases support multi-row insert (PostgreSQL, MySQL, SQLite 3.7.11+, SQL Server 2008+, Oracle 23c+). On older Oracle versions, you can use the INSERT ALL syntax instead:

-- Oracle-specific multi-row insert
INSERT ALL
INTO products (product_id, product_name, price) VALUES (101, 'Wireless Mouse', 25.99)
INTO products (product_id, product_name, price) VALUES (102, 'USB-C Cable', 9.99)
SELECT 1 FROM DUAL;

Batch Size Limits

While multi-row inserts are efficient, databases have limits on how many rows or how much data you can include in a single statement:

| Database | Typical Limit |
|---|---|
| PostgreSQL | No hard row limit (a single statement is capped at 1 GB of query text) |
| MySQL | Limited by max_allowed_packet (default 4 MB to 64 MB depending on version) |
| SQL Server | Max 1,000 rows per INSERT ... VALUES |
| SQLite | 500 rows by default (compile-time limit SQLITE_MAX_COMPOUND_SELECT) |

For very large datasets (tens of thousands of rows), insert in batches of 500 to 1,000 rows per statement rather than trying to squeeze everything into one gigantic query.
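As a sketch of that batching advice (using SQLite via Python's sqlite3; the 250-row batch size is an arbitrary illustration, chosen here to keep rows times columns under SQLite's bound-parameter limit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (product_id INTEGER PRIMARY KEY, "
    "product_name TEXT, price REAL)"
)

# 5,000 rows of generated sample data
rows = [(i, f"Item {i}", 1.0 + i) for i in range(1, 5001)]

BATCH = 250  # rows per statement; keeps rows x columns under parameter limits

with conn:  # one transaction around the whole load
    for start in range(0, len(rows), BATCH):
        chunk = rows[start:start + BATCH]
        # One multi-row INSERT per chunk: VALUES (?,?,?),(?,?,?),...
        placeholders = ",".join(["(?,?,?)"] * len(chunk))
        params = [value for row in chunk for value in row]
        conn.execute(
            "INSERT INTO products (product_id, product_name, price) "
            f"VALUES {placeholders}",
            params,
        )

count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
print(count)  # 5000
```

The same chunking structure carries over to other engines; only the placeholder syntax and the safe batch size change.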

INSERT INTO ... SELECT

One of the most powerful variations of INSERT INTO lets you populate a table using the results of a SELECT query. Instead of hardcoding values, you pull data from existing tables, transform it if needed, and insert it directly.

Syntax

INSERT INTO target_table (column1, column2, column3)
SELECT columnA, columnB, columnC
FROM source_table
WHERE condition;

There is no VALUES keyword. The SELECT takes its place, and each row returned by the query becomes a new row in the target table.

Example: Archiving Customers

Suppose you want to copy all New York customers into an archive table:

INSERT INTO archived_customers (customer_id, first_name, last_name, email, city)
SELECT customer_id, first_name, last_name, email, city
FROM customers
WHERE city = 'New York';

Let's check:

SELECT * FROM archived_customers;
| customer_id | first_name | last_name | email | city |
|---|---|---|---|---|
| 1 | Alice | Martin | alice@example.com | New York |

The SELECT found one matching row (Alice in New York) and inserted it into archived_customers.

Example: Creating Summary Data

You can use INSERT INTO ... SELECT with aggregations, joins, and any valid SELECT syntax:

CREATE TABLE category_summary (
    category VARCHAR(50) PRIMARY KEY,
    product_count INT,
    avg_price DECIMAL(10,2),
    max_price DECIMAL(10,2)
);

INSERT INTO category_summary (category, product_count, avg_price, max_price)
SELECT
    category,
    COUNT(*),
    ROUND(AVG(price), 2),
    MAX(price)
FROM products
GROUP BY category;
SELECT * FROM category_summary ORDER BY category;
| category | product_count | avg_price | max_price |
|---|---|---|---|
| Electronics | 3 | 41.99 | 89.99 |
| Furniture | 1 | 34.50 | 34.50 |
| Stationery | 2 | 3.75 | 4.50 |

Example: Copying Between Tables with Transformation

You can transform data during the insert by using expressions, functions, or computed columns in the SELECT:

CREATE TABLE customer_emails (
    id INT PRIMARY KEY,
    full_name VARCHAR(100),
    email VARCHAR(100),
    name_upper VARCHAR(100)
);

INSERT INTO customer_emails (id, full_name, email, name_upper)
SELECT
    customer_id,
    first_name || ' ' || last_name,
    COALESCE(email, 'no-email@placeholder.com'),
    UPPER(first_name || ' ' || last_name)
FROM customers;

Let's check:

SELECT * FROM customer_emails;
| id | full_name | email | name_upper |
|---|---|---|---|
| 1 | Alice Martin | alice@example.com | ALICE MARTIN |
| 2 | Bob Jones | bob@example.com | BOB JONES |
| 3 | Carol Smith | carol@example.com | CAROL SMITH |
| 4 | Dave Wilson | no-email@placeholder.com | DAVE WILSON |

Dave had a NULL email, which COALESCE replaced with a placeholder value during the insert.

Column Matching Rules

The SELECT columns must match the INSERT columns in count, order, and compatible data types, just like with VALUES:

Wrong (column count mismatch):

INSERT INTO archived_customers (customer_id, first_name, last_name)
SELECT customer_id, first_name, last_name, email -- 4 columns vs 3 targets
FROM customers;
ERROR: INSERT has more expressions than target columns

Correct:

INSERT INTO archived_customers (customer_id, first_name, last_name)
SELECT customer_id, first_name, last_name -- 3 columns match 3 targets
FROM customers;
info

The column names in the SELECT do not need to match the target column names. Only the position and data type compatibility matter. The first selected column goes into the first listed target column, the second into the second, and so on.

-- This works even though the column names differ
INSERT INTO archived_customers (customer_id, first_name, last_name)
SELECT id, given_name, surname
FROM some_other_table;

Handling Auto-Increment and Serial Columns

Most real-world tables use an auto-increment (MySQL, SQLite) or serial (PostgreSQL) column for the primary key. This means the database generates a unique ID automatically whenever you insert a row, and you should not provide a value for that column.

Creating Tables with Auto-Increment

The syntax varies by database:

-- PostgreSQL
CREATE TABLE orders (
    order_id SERIAL PRIMARY KEY, -- or GENERATED ALWAYS AS IDENTITY
    customer_id INT NOT NULL,
    order_date DATE DEFAULT CURRENT_DATE,
    total_amount DECIMAL(10,2)
);

-- MySQL
CREATE TABLE orders (
    order_id INT AUTO_INCREMENT PRIMARY KEY,
    customer_id INT NOT NULL,
    order_date DATE DEFAULT (CURRENT_DATE),
    total_amount DECIMAL(10,2)
);

-- SQLite
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY AUTOINCREMENT,
    customer_id INT NOT NULL,
    order_date DATE DEFAULT CURRENT_DATE,
    total_amount DECIMAL(10,2)
);

-- SQL Server
CREATE TABLE orders (
    order_id INT IDENTITY(1,1) PRIMARY KEY,
    customer_id INT NOT NULL,
    order_date DATE DEFAULT GETDATE(),
    total_amount DECIMAL(10,2)
);

Inserting Without the Auto-Increment Column

Simply omit the auto-generated column from both the column list and the values:

INSERT INTO orders (customer_id, order_date, total_amount)
VALUES (1, '2024-06-15', 150.00);

INSERT INTO orders (customer_id, order_date, total_amount)
VALUES (1, '2024-06-18', 299.99);

INSERT INTO orders (customer_id, total_amount)
VALUES (2, 85.50);
SELECT * FROM orders;
| order_id | customer_id | order_date | total_amount |
|---|---|---|---|
| 1 | 1 | 2024-06-15 | 150.00 |
| 2 | 1 | 2024-06-18 | 299.99 |
| 3 | 2 | 2026-02-25 | 85.50 |

The order_id values (1, 2, 3) were assigned automatically. The third order's order_date used the default value since it was omitted.

Retrieving the Generated ID

After inserting a row with an auto-generated ID, you often need to know what ID was assigned. Each database has its own mechanism:

-- PostgreSQL: RETURNING clause
INSERT INTO orders (customer_id, total_amount)
VALUES (3, 420.00)
RETURNING order_id;
-- Returns: 4

-- MySQL: LAST_INSERT_ID()
INSERT INTO orders (customer_id, total_amount)
VALUES (3, 420.00);
SELECT LAST_INSERT_ID();
-- Returns: 4

-- SQLite: last_insert_rowid()
INSERT INTO orders (customer_id, total_amount)
VALUES (3, 420.00);
SELECT last_insert_rowid();
-- Returns: 4

-- SQL Server: SCOPE_IDENTITY() or OUTPUT clause
INSERT INTO orders (customer_id, total_amount)
OUTPUT INSERTED.order_id
VALUES (3, 420.00);
-- Returns: 4
tip

PostgreSQL's RETURNING clause is especially powerful because you can return any columns from the inserted row, not just the ID:

INSERT INTO orders (customer_id, total_amount)
VALUES (3, 420.00)
RETURNING order_id, order_date, total_amount;
| order_id | order_date | total_amount |
|---|---|---|
| 4 | 2026-02-25 | 420.00 |
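From application code, most drivers expose these mechanisms directly. A sketch using Python's sqlite3, whose cursor.lastrowid attribute wraps last_insert_rowid():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,   -- SQLite assigns this automatically
    customer_id INTEGER NOT NULL,
    total_amount REAL
)
""")

cur = conn.execute(
    "INSERT INTO orders (customer_id, total_amount) VALUES (?, ?)", (3, 420.00)
)
# The driver captures last_insert_rowid() for us
parent_id = cur.lastrowid
print(parent_id)  # 1 -- first row inserted into an empty table

# The generated ID is typically used right away, e.g. for a related insert
cur2 = conn.execute(
    "INSERT INTO orders (customer_id, total_amount) VALUES (?, ?)", (3, 99.00)
)
print(cur2.lastrowid)  # 2
```

Other drivers offer equivalents: psycopg2 fetches the RETURNING result, and MySQL connectors expose a lastrowid attribute backed by LAST_INSERT_ID().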

What Happens If You Try to Insert the Auto-Increment Column

The behavior depends on the database and the column definition:

PostgreSQL with GENERATED ALWAYS AS IDENTITY:

INSERT INTO orders (order_id, customer_id, total_amount)
VALUES (99, 1, 50.00);
ERROR: cannot insert a non-DEFAULT value into column "order_id"
DETAIL: Column "order_id" is an identity column defined as GENERATED ALWAYS.

PostgreSQL with SERIAL or MySQL with AUTO_INCREMENT:

Explicitly providing a value is usually allowed and the sequence/counter adjusts accordingly, but this can cause conflicts if the manually inserted ID collides with a future auto-generated value.

danger

Avoid manually inserting values into auto-increment columns unless you have a specific reason (such as data migration). Manually inserted IDs can collide with future auto-generated values, causing unique constraint violations that are difficult to debug.

If you must insert a specific ID, reset the sequence afterward:

-- PostgreSQL: reset the sequence
SELECT setval('orders_order_id_seq', (SELECT MAX(order_id) FROM orders));

-- MySQL: reset auto_increment
ALTER TABLE orders AUTO_INCREMENT = 100;

Multi-Row Insert with Auto-Increment

Multi-row inserts work normally with auto-increment columns. Just omit the auto-generated column:

INSERT INTO orders (customer_id, order_date, total_amount)
VALUES
(1, '2024-07-01', 200.00),
(2, '2024-07-02', 350.00),
(3, '2024-07-03', 125.00);

Each row receives a unique, sequentially assigned ID.

INSERT INTO ... SELECT with Auto-Increment

The same principle applies. Omit the auto-increment column from the target column list:

-- Copy all orders from customer 1, assigning new order IDs automatically
INSERT INTO orders (customer_id, order_date, total_amount)
SELECT customer_id, order_date, total_amount
FROM orders
WHERE customer_id = 1;

The new rows get fresh auto-generated order_id values. The original rows are unaffected.
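To see this end to end, here is a sketch using SQLite via Python's sqlite3 (the ORDER BY forces the SELECT to materialize before inserting, which keeps the self-copy well-defined):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,   -- auto-assigned by SQLite
    customer_id INTEGER NOT NULL,
    total_amount REAL
)
""")
conn.executemany(
    "INSERT INTO orders (customer_id, total_amount) VALUES (?, ?)",
    [(1, 150.00), (1, 299.99), (2, 85.50)],   # rows get order_id 1, 2, 3
)

# Copy customer 1's orders; order_id is omitted, so fresh IDs are generated
conn.execute("""
INSERT INTO orders (customer_id, total_amount)
SELECT customer_id, total_amount FROM orders
WHERE customer_id = 1
ORDER BY order_id
""")

ids = [r[0] for r in conn.execute(
    "SELECT order_id FROM orders WHERE customer_id = 1 ORDER BY order_id")]
print(ids)  # [1, 2, 4, 5] -- copies received new IDs after the existing max (3)
```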

Handling Duplicate Keys

When inserting data, you may encounter conflicts with existing primary key or unique constraint values. Different databases offer different strategies to handle this gracefully.

PostgreSQL: ON CONFLICT

-- Insert or update if the customer_id already exists
INSERT INTO customers (customer_id, first_name, last_name, email, city)
VALUES (1, 'Alice', 'Martin', 'alice.new@example.com', 'Boston')
ON CONFLICT (customer_id)
DO UPDATE SET
email = EXCLUDED.email,
city = EXCLUDED.city;

If customer 1 already exists, this updates the email and city instead of failing.

MySQL: ON DUPLICATE KEY UPDATE

INSERT INTO customers (customer_id, first_name, last_name, email, city)
VALUES (1, 'Alice', 'Martin', 'alice.new@example.com', 'Boston')
ON DUPLICATE KEY UPDATE
email = VALUES(email),
city = VALUES(city);

Ignoring Duplicates Silently

If you want to skip rows that would cause a conflict without raising an error:

-- PostgreSQL
INSERT INTO customers (customer_id, first_name, last_name, email)
VALUES (1, 'Alice', 'Martin', 'alice@example.com')
ON CONFLICT (customer_id) DO NOTHING;

-- MySQL
INSERT IGNORE INTO customers (customer_id, first_name, last_name, email)
VALUES (1, 'Alice', 'Martin', 'alice@example.com');
info

Handling duplicates is particularly useful when loading data from external sources where you cannot guarantee uniqueness beforehand, or when implementing idempotent data pipelines that may run the same insert multiple times.
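SQLite provides the same behavior through its long-standing INSERT OR IGNORE form, which makes it easy to sketch an idempotent load step in Python (the table layout and helper name here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    first_name TEXT,
    email TEXT
)
""")

def load(rows):
    # Idempotent load step: rows whose primary key already exists are skipped
    conn.executemany(
        "INSERT OR IGNORE INTO customers (customer_id, first_name, email) "
        "VALUES (?, ?, ?)",
        rows,
    )

batch = [(1, "Alice", "alice@example.com"), (2, "Bob", "bob@example.com")]
load(batch)
load(batch)  # re-running is a no-op rather than a primary-key violation

count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(count)  # 2 -- the second run inserted nothing
```

Because a repeated run leaves the table unchanged, a pipeline built this way can safely retry a failed load from the beginning.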

Common Mistakes

Mistake 1: Column and Value Count Mismatch

The number of columns listed must exactly match the number of values provided.

Wrong:

INSERT INTO customers (customer_id, first_name, last_name, email)
VALUES (5, 'Eve', 'Turner');
-- 4 columns, 3 values
ERROR: INSERT has more target columns than expressions

Correct:

INSERT INTO customers (customer_id, first_name, last_name, email)
VALUES (5, 'Eve', 'Turner', 'eve@example.com');

Mistake 2: Violating NOT NULL Constraints

Omitting a NOT NULL column that has no default value causes an error.

Wrong:

-- "first_name" is NOT NULL and has no default
INSERT INTO customers (customer_id, last_name)
VALUES (6, 'Baker');
ERROR: null value in column "first_name" violates not-null constraint

Correct:

INSERT INTO customers (customer_id, first_name, last_name)
VALUES (6, 'Grace', 'Baker');

Mistake 3: Forgetting Single Quotes Around Strings

String and date literals must be enclosed in single quotes. Double quotes have a different meaning in SQL (they reference identifiers like table or column names).

Wrong:

INSERT INTO customers (customer_id, first_name, last_name)
VALUES (7, "Hank", "Miller");
-- Double quotes: interpreted as column names, not string values
ERROR: column "Hank" does not exist

Correct:

INSERT INTO customers (customer_id, first_name, last_name)
VALUES (7, 'Hank', 'Miller');

Mistake 4: Inserting Strings with Apostrophes

If a string value contains a single quote (apostrophe), you must escape it by doubling it:

Wrong:

INSERT INTO customers (customer_id, first_name, last_name)
VALUES (8, 'O'Brien', 'Test');
-- The apostrophe in O'Brien breaks the string
ERROR: syntax error at or near "Brien"

Correct:

INSERT INTO customers (customer_id, first_name, last_name)
VALUES (8, 'O''Brien', 'Test');
-- Two single quotes '' represent a literal apostrophe
tip

When building INSERT statements programmatically, always use parameterized queries (prepared statements) instead of string concatenation. This handles escaping automatically and prevents SQL injection attacks:

# Python example with a parameterized query
# (placeholder style varies by driver: %s for psycopg2/MySQL, ? for sqlite3)
cursor.execute(
    "INSERT INTO customers (customer_id, first_name, last_name) VALUES (%s, %s, %s)",
    (8, "O'Brien", "Test")
)

Mistake 5: Wrong Column Order in INSERT INTO ... SELECT

The columns must align by position between the INSERT column list and the SELECT output:

Wrong (swapped columns):

INSERT INTO archived_customers (customer_id, first_name, last_name, email, city)
SELECT customer_id, last_name, first_name, email, city -- first/last swapped!
FROM customers;

This runs without error, but first names end up in the last_name column and vice versa.

Correct:

INSERT INTO archived_customers (customer_id, first_name, last_name, email, city)
SELECT customer_id, first_name, last_name, email, city
FROM customers;
warning

The database does not check whether column names match between the INSERT list and the SELECT output. It only checks that the count and types are compatible. Misaligned columns will silently insert data into the wrong fields.

Performance Tips

Use Multi-Row Inserts

As discussed earlier, batching rows into a single INSERT statement reduces network round trips and transaction overhead significantly.

Wrap Bulk Inserts in a Transaction

If you are inserting many rows across multiple statements, wrap them in an explicit transaction:

BEGIN;

INSERT INTO products (product_id, product_name, category, price)
VALUES (201, 'Monitor Stand', 'Furniture', 45.00);

INSERT INTO products (product_id, product_name, category, price)
VALUES (202, 'Cable Organizer', 'Accessories', 12.99);

-- ... more inserts ...

COMMIT;

Without an explicit transaction, each INSERT is auto-committed individually, which is slower. Wrapping them in a transaction lets the database commit all changes at once.

Disable Indexes During Large Loads

For very large data loads (millions of rows), consider temporarily disabling indexes, loading the data, and then rebuilding the indexes. Index maintenance during inserts adds overhead that compounds with volume.
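A minimal sketch of that drop-load-rebuild pattern, assuming SQLite via Python's sqlite3 (the index name is illustrative; some engines offer alternatives such as SQL Server's ALTER INDEX ... DISABLE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (product_id INTEGER PRIMARY KEY, category TEXT)"
)
conn.execute("CREATE INDEX idx_products_category ON products (category)")

rows = [(i, f"cat{i % 10}") for i in range(1, 10001)]

# 1. Drop the secondary index so each insert skips index maintenance
conn.execute("DROP INDEX idx_products_category")

# 2. Load the data inside one transaction
with conn:
    conn.executemany(
        "INSERT INTO products (product_id, category) VALUES (?, ?)", rows
    )

# 3. Rebuild the index once, after the load completes
conn.execute("CREATE INDEX idx_products_category ON products (category)")

count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
print(count)  # 10000
```

Building the index once over the finished table is cheaper than updating it ten thousand times, though the table is unindexed (and queries on it slower) during the load.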

Use COPY or Bulk Load for Massive Datasets

When loading millions of rows, INSERT statements (even multi-row ones) are not the fastest option. Use your database's bulk loading tool:

| Database | Bulk Load Command |
|---|---|
| PostgreSQL | COPY table FROM '/path/to/file.csv' |
| MySQL | LOAD DATA INFILE '/path/to/file.csv' |
| SQL Server | BULK INSERT table FROM '/path/to/file.csv' |
| SQLite | .import /path/to/file.csv table |

These tools bypass much of the SQL parsing and logging overhead, making them orders of magnitude faster for large loads.

Quick Reference

| Variation | Syntax | Use Case |
|---|---|---|
| Single row | INSERT INTO t (cols) VALUES (vals) | Adding one record |
| Multi-row | INSERT INTO t (cols) VALUES (v1), (v2), (v3) | Batch loading small to medium sets |
| From query | INSERT INTO t (cols) SELECT cols FROM source | Copying, archiving, transforming data |
| With defaults | Omit column or use DEFAULT keyword | Letting the database fill in values |
| Conflict handling | ON CONFLICT / ON DUPLICATE KEY | Upserts, idempotent loads |

Summary

The SQL INSERT INTO statement is the gateway for getting data into your tables. Whether you are adding a single record through a form submission, batch-loading product data, or archiving old records from one table to another, INSERT has a variation that fits your needs.

Key takeaways:

  • Always list your columns explicitly in the INSERT statement. Omitting the column list makes your query fragile and prone to breaking when the schema changes.
  • Use multi-row inserts to batch data efficiently. A single statement with multiple value sets is dramatically faster than many individual inserts.
  • INSERT INTO ... SELECT is a powerful tool for copying, transforming, and archiving data between tables without pulling it out of the database.
  • Auto-increment columns should be omitted from your INSERT statements. Let the database generate unique IDs automatically, and use database-specific functions (RETURNING, LAST_INSERT_ID(), SCOPE_IDENTITY()) to retrieve the generated value.
  • Handle duplicates gracefully with ON CONFLICT (PostgreSQL) or ON DUPLICATE KEY UPDATE (MySQL) when inserting data that might conflict with existing records.
  • Watch your data types, quote strings with single quotes, escape apostrophes by doubling them, and always verify column order matches between your INSERT list and VALUES or SELECT.
  • For massive data loads, move beyond INSERT and use bulk loading tools like COPY or LOAD DATA INFILE.

Mastering INSERT INTO in all its forms gives you complete control over how data enters your database, from single records to million-row migrations.