# Unix Timestamps: What They Are and How to Use Them
A Unix timestamp is the number of seconds elapsed since January 1, 1970 at 00:00:00 UTC — a fixed reference point called the Unix epoch. Right now, that number is somewhere around 1,741,600,000. It increments by 1 every second, forever, and it's the most common way to store time in databases, APIs, and log files.
If you've ever seen a field like created_at: 1741600000 and wondered what it meant, this post explains how to read it, convert it to a human-readable date, and avoid the common gotchas. You can convert any timestamp instantly with the Timestamp Converter.
## What does a Unix timestamp actually represent?
Unix timestamps count seconds from midnight UTC on January 1, 1970 — the so-called "epoch." The choice of 1970 is historical: early Unix systems used a 32-bit integer for time, and the designers picked an epoch that would stay positive for decades.
A few reference points to build intuition:
| Date | Unix timestamp |
|---|---|
| 1970-01-01 00:00:00 UTC | 0 |
| 2000-01-01 00:00:00 UTC | 946,684,800 |
| 2024-01-01 00:00:00 UTC | 1,704,067,200 |
| 2025-03-11 00:00:00 UTC | 1,741,651,200 |
The number grows by exactly 86,400 each calendar day (60 seconds × 60 minutes × 24 hours). That predictability is why developers prefer timestamps over formatted date strings — arithmetic is trivial.
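As a quick sanity check on the table above, the gap between any two reference timestamps divides evenly by 86,400. A short Python sketch:

```python
# Reference timestamps from the table above.
epoch = 0              # 1970-01-01 00:00:00 UTC
y2000 = 946_684_800    # 2000-01-01 00:00:00 UTC
y2024 = 1_704_067_200  # 2024-01-01 00:00:00 UTC

# Whole days between the dates -- no remainder, ever.
print((y2000 - epoch) % 86_400)   # 0
print((y2000 - epoch) // 86_400)  # 10957 days from 1970 to 2000
print((y2024 - y2000) // 86_400)  # 8766 days from 2000 to 2024
```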
## Why do APIs and databases use Unix timestamps?
Unix timestamps solve a problem that formatted dates create: ambiguity. The string 03/04/2026 means March 4th in the US and April 3rd in Europe. 2026-03-04T09:00:00 looks unambiguous until you ask whether it's local time or UTC. A Unix timestamp has no timezone, no locale, no format debate — it's just an integer.
Other reasons timestamps dominate:
- Sorting is free. Higher number = later date. No parsing needed.
- Arithmetic is simple. "7 days from now" is `now + 604800`; "1 hour ago" is `now - 3600`.
- Storage is compact. 10 bytes as a string, 4–8 bytes as an integer.
- Language-agnostic. Every programming language and database understands an integer.
MongoDB, PostgreSQL, Redis, and most REST APIs default to Unix timestamps (or milliseconds, which is the same idea scaled by 1000).
## Seconds vs. milliseconds: why do some timestamps look different?
Seconds and milliseconds are the two common formats, and confusing them is one of the most frequent bugs in date-related code.
A second-precision timestamp for March 2025 looks like 1,741,651,200 (10 digits). A millisecond-precision timestamp for the same moment looks like 1,741,651,200,000 (13 digits). If you see a 13-digit number in a timestamp field, it's milliseconds.
JavaScript's Date.now() returns milliseconds. Python's time.time() returns seconds (as a float). This mismatch causes real bugs:
```javascript
// JavaScript — milliseconds
const nowMs = Date.now();                     // 1741651200000
const nowSec = Math.floor(Date.now() / 1000); // 1741651200
```

```python
# Python — seconds
import time

now_sec = int(time.time())        # 1741651200
now_ms = int(time.time() * 1000)  # 1741651200000
```
If you pass a JavaScript timestamp directly to a Python function expecting seconds, the result is a date in the year 57,000. Check your units before you convert.
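One defensive pattern is to normalize by digit count before converting. The `to_seconds` helper below is a hypothetical sketch of that heuristic (not a standard library function), using the digit-count rules described above:

```python
def to_seconds(ts: int) -> int:
    """Normalize a timestamp to seconds using a digit-count heuristic.

    Assumes 13-digit values are milliseconds and 16-digit values are
    microseconds; anything shorter is treated as seconds already.
    """
    digits = len(str(abs(ts)))
    if digits >= 16:   # microseconds
        return ts // 1_000_000
    if digits >= 13:   # milliseconds
        return ts // 1_000
    return ts          # already seconds

print(to_seconds(1741651200000))  # 1741651200 (milliseconds in)
print(to_seconds(1741651200))     # 1741651200 (seconds pass through)
```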
## How do you convert a Unix timestamp to a human-readable date?
Converting a timestamp to a readable date is one line in every major language. Here's the same conversion across five environments:
JavaScript (browser or Node.js):

```javascript
const ts = 1741651200;
const date = new Date(ts * 1000); // multiply by 1000 for ms
console.log(date.toISOString()); // "2025-03-11T00:00:00.000Z"
console.log(date.toLocaleString('en-US', { timeZone: 'America/Chicago' }));
// "3/10/2025, 7:00:00 PM"
```
Python:

```python
from datetime import datetime, timezone

ts = 1741651200
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # '2025-03-11T00:00:00+00:00'

# Local time
dt_local = datetime.fromtimestamp(ts)
print(dt_local)  # depends on your system timezone
```
PostgreSQL:

```sql
SELECT to_timestamp(1741651200);
-- 2025-03-11 00:00:00+00
```
MySQL:

```sql
SELECT FROM_UNIXTIME(1741651200);
-- 2025-03-11 00:00:00 (in the session time zone)
```
Bash:

```bash
date -d @1741651200 --utc
# Tue Mar 11 00:00:00 UTC 2025
```
For quick one-off conversions, the Timestamp Converter handles seconds, milliseconds, and microseconds automatically.
## How do you get the current Unix timestamp?
Every language has a built-in way to get "now" as a timestamp:
```javascript
// JavaScript
const now = Math.floor(Date.now() / 1000);
```

```python
# Python 3
import time

now = int(time.time())
```

```bash
# Bash / shell
date +%s
```

```sql
-- PostgreSQL
SELECT EXTRACT(EPOCH FROM NOW())::bigint;

-- MySQL
SELECT UNIX_TIMESTAMP();
```

```go
// Go
import "time"

now := time.Now().Unix()
```

```rust
// Rust
use std::time::{SystemTime, UNIX_EPOCH};

let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
```
## What is the year 2038 problem?
The year 2038 problem is a real overflow issue, not theoretical. It happens when systems store Unix timestamps as a 32-bit signed integer (int32). The max value is 2,147,483,647, which corresponds to January 19, 2038 at 03:14:07 UTC. After that moment, the counter rolls over to a large negative number — which most systems interpret as December 13, 1901.
The fix is straightforward: use 64-bit integers (int64), which don't overflow until the year 292,277,026,596 — effectively never. Most modern systems already do this. Linux kernel 5.6+ (released 2020) uses 64-bit timestamps on 32-bit systems. PostgreSQL uses 64-bit timestamps by default.
Where you still need to watch out: embedded systems, old IoT firmware, legacy databases with INT columns storing timestamps, and any C code using the time_t type on 32-bit platforms.
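You can watch the rollover happen by forcing a value into a signed 32-bit integer. A small Python sketch using `ctypes` (the final `fromtimestamp` call assumes a platform that accepts pre-1970 timestamps, as Linux and macOS do):

```python
import ctypes
from datetime import datetime, timezone

INT32_MAX = 2_147_483_647
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- the last second a 32-bit time_t can hold

wrapped = ctypes.c_int32(INT32_MAX + 1).value  # simulate the overflow
print(wrapped)  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```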
## How do you add or subtract time from a Unix timestamp?
Unix timestamps make date arithmetic simple because the unit is seconds. Common offsets:
```javascript
const now = Math.floor(Date.now() / 1000);
const oneHourLater = now + 3600;     // 60 * 60
const oneDayLater = now + 86400;     // 60 * 60 * 24
const oneWeekLater = now + 604800;   // 60 * 60 * 24 * 7
const thirtyDaysAgo = now - 2592000; // 60 * 60 * 24 * 30

// Check if a token expires within the next 5 minutes
const tokenExpiry = 1741651800;
const isExpiringSoon = (tokenExpiry - now) < 300;
```
This works perfectly for durations in seconds. Where it breaks down: calendar-aware arithmetic. "One month from now" isn't 30 days — it's the same date next month, which could be 28, 29, 30, or 31 days depending on the month. For calendar math, reach for a library like date-fns, Luxon, or Python's dateutil instead of raw seconds.
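To illustrate what calendar-aware arithmetic involves, here is a minimal stdlib-only Python sketch of "same date next month". The `add_one_month` helper is hypothetical; libraries like `dateutil` handle many more edge cases:

```python
import calendar
from datetime import datetime, timezone

def add_one_month(ts: int) -> int:
    """Return the timestamp for 'the same date next month' in UTC.

    Clamps the day when the next month is shorter, e.g. Jan 31 -> Feb 28.
    """
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    year = dt.year + dt.month // 12        # roll the year over from December
    month = dt.month % 12 + 1
    day = min(dt.day, calendar.monthrange(year, month)[1])
    return int(dt.replace(year=year, month=month, day=day).timestamp())

jan31 = int(datetime(2025, 1, 31, tzinfo=timezone.utc).timestamp())
feb = datetime.fromtimestamp(add_one_month(jan31), tz=timezone.utc)
print(feb.date())  # 2025-02-28 -- not simply jan31 + 30 * 86400
```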
## Quick reference

| Task | JavaScript | Python |
|---|---|---|
| Current timestamp (seconds) | `Math.floor(Date.now() / 1000)` | `int(time.time())` |
| Current timestamp (ms) | `Date.now()` | `int(time.time() * 1000)` |
| Timestamp → Date object | `new Date(ts * 1000)` | `datetime.fromtimestamp(ts)` |
| Date → timestamp | `Math.floor(date.getTime() / 1000)` | `int(dt.timestamp())` |
| Add 1 hour | `ts + 3600` | `ts + 3600` |
| Add 1 day | `ts + 86400` | `ts + 86400` |
| Format as ISO 8601 | `new Date(ts*1000).toISOString()` | `datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()` |
Seconds vs. milliseconds cheat sheet:

- 10-digit timestamp → seconds (most APIs, Python `time.time()`)
- 13-digit timestamp → milliseconds (JavaScript `Date.now()`, many NoSQL stores)
- 16-digit timestamp → microseconds (PostgreSQL internal, some Python libraries)
The Timestamp Converter accepts all three formats and converts to UTC, your local timezone, or any IANA timezone like America/New_York. Paste a timestamp and it handles the rest.