All of this content was explained in my newsletter, TechInsightsWeekly. This project demonstrates how to implement caching in a Node.js API with Redis, showcasing the performance benefits that caching can bring to your application.
- Node.js (version 14 or higher)
- Redis Server
- npm or yarn
- Clone the repository:

  ```bash
  git clone [your-repository]
  cd api-caching-nodejs
  ```

- Install dependencies:

  ```bash
  npm install
  ```
- Configure Redis and OpenTelemetry with Docker (a hypothetical compose file is sketched after this list):

  ```bash
  # Start all containers
  docker-compose up -d
  ```
- Configure environment variables:
  - The `.env` file is already configured with default settings
  - You can adjust the settings as needed (a hypothetical sample is sketched after this list)
- Start all containers (if not already running):

  ```bash
  docker-compose up -d
  ```

- Start the server:

  ```bash
  npm run start
  ```

- The API will be available at `http://localhost:3000`
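The repository ships its own `docker-compose.yml`; the following is only a hypothetical sketch of what a minimal setup with Redis and an OpenTelemetry Collector could look like. The image tags and port mappings are assumptions, chosen to match the ports used elsewhere in this README:

```yaml
# Hypothetical sketch of docker-compose.yml -- the project's real file may differ.
services:
  redis:
    image: redis:7                           # assumed image tag
    ports:
      - "6379:6379"                          # default Redis port
  otel-collector:
    image: otel/opentelemetry-collector:latest
    ports:
      - "4318:4318"                          # OTLP/HTTP ingest
      - "8889:8889"                          # Prometheus scrape endpoint (see metrics URL below)
```

Likewise, the variable names below are purely illustrative (the actual `.env` keys depend on the project's code); a typical configuration might look like:

```env
# Hypothetical .env -- variable names are assumptions, not the project's actual keys
PORT=3000
REDIS_URL=redis://localhost:6379
CACHE_TTL=3600   # seconds (1 hour, matching the TTL described below)
```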
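With the containers running and the server started, a quick smoke test from the shell confirms everything is wired up:

```bash
# Should return JSON from the API once the server is listening
curl http://localhost:3000/api/posts
```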
The application is instrumented with OpenTelemetry, providing:
- Request tracing for all endpoints
- Cache hit/miss events
- Error tracking
- Performance metrics:
  - Cache hit ratio
  - Request latency
  - Error rates
  - Redis connection status
- Prometheus metrics are available at `http://localhost:8889/metrics`
- Traces are logged to the OpenTelemetry collector
- View detailed logs in the collector container
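As a rough illustration of how such instrumentation is typically wired up in Node.js, here is a generic bootstrap using the standard OpenTelemetry SDK. This is a sketch, not necessarily this project's exact setup; the service name and collector URL are assumptions based on the ports above:

```javascript
// Generic OpenTelemetry bootstrap sketch -- not necessarily this project's exact code.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const sdk = new NodeSDK({
  serviceName: 'api-caching-nodejs',                   // assumed service name
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces',            // collector OTLP/HTTP endpoint
  }),
  instrumentations: [getNodeAutoInstrumentations()],   // auto-traces HTTP, Express, Redis
});

sdk.start(); // must run before the app is loaded so modules get patched
```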
To stop all containers:

```bash
docker-compose down
```
- `GET /api/posts`: Returns a list of posts
- `DELETE /api/cache`: Clears the cache
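To make the flow concrete, here is a minimal cache-aside sketch of how these two endpoints could be implemented. The cache key, the external API URL, and the `axios` dependency are all assumptions; the project's actual source may differ:

```javascript
// Minimal cache-aside sketch -- names and the external URL are assumptions.
const express = require('express');
const axios = require('axios');
const { createClient } = require('redis');

const app = express();
const redis = createClient({ url: process.env.REDIS_URL || 'redis://localhost:6379' });

const CACHE_KEY = 'posts';   // hypothetical cache key
const TTL_SECONDS = 3600;    // 1 hour, as described below

app.get('/api/posts', async (req, res) => {
  const cached = await redis.get(CACHE_KEY);
  if (cached) {
    console.log('Serving from cache');
    return res.json(JSON.parse(cached));
  }
  console.log('Fetching from external API');
  // Placeholder origin -- substitute the real external API here.
  const { data } = await axios.get('https://jsonplaceholder.typicode.com/posts');
  await redis.set(CACHE_KEY, JSON.stringify(data), { EX: TTL_SECONDS });
  res.json(data);
});

app.delete('/api/cache', async (req, res) => {
  await redis.del(CACHE_KEY);
  res.json({ message: 'Cache cleared successfully' });
});

redis.connect().then(() => app.listen(3000, () => console.log('Listening on :3000')));
```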
```bash
# First request to /api/posts
curl http://localhost:3000/api/posts
```

Server logs will show:

```
Fetching from external API
```

Response time: ~200-300ms (varies based on network conditions)

```bash
# Second request to /api/posts (immediately after the first request)
curl http://localhost:3000/api/posts
```

Server logs will show:

```
Serving from cache
```

Response time: ~1-5ms (much faster!)

```bash
# Clear the cache
curl -X DELETE http://localhost:3000/api/cache
```

Response:

```json
{
  "message": "Cache cleared successfully"
}
```

```bash
# Request after cache clear
curl http://localhost:3000/api/posts
```

Server logs will show:

```
Fetching from external API
```
1. **Initial State (No Cache)**
   - First request hits the external API
   - Data is stored in Redis cache
   - Response time is slower
2. **Cached State**
   - Subsequent requests serve data from Redis
   - No external API calls
   - Much faster response times
   - Cache persists for 1 hour (TTL)
3. **Cache Clear**
   - DELETE request to `/api/cache` removes cached data
   - Next request will fetch fresh data from external API
4. **Cache Expiration**
   - After 1 hour (TTL), the cache automatically expires
   - Next request will fetch fresh data from external API
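If you want to watch the expiration yourself, `redis-cli` can report the remaining TTL of a key. The key name `posts` and the `redis` service name are the assumptions used in the sketches above:

```bash
# Returns remaining seconds, -1 if the key has no TTL, -2 if it does not exist
docker-compose exec redis redis-cli TTL posts
```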
```
Request 1 (No Cache)
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Client    │────▶│   Server    │────▶│ External API│
└─────────────┘     └─────────────┘     └─────────────┘
                           │
                           ▼
                    ┌─────────────┐
                    │    Redis    │
                    │    Cache    │
                    └─────────────┘

Request 2 (With Cache)
┌─────────────┐     ┌─────────────┐
│   Client    │────▶│   Server    │
└─────────────┘     └─────────────┘
                           │
                           ▼
                    ┌─────────────┐
                    │    Redis    │
                    │    Cache    │
                    └─────────────┘
```
- Reduced Latency: Cached data is served directly from memory, eliminating the need for database queries or external API calls.
- Response Time: Cached responses are served in milliseconds, while external API calls can take hundreds of milliseconds.
- Reduced Load: Cache significantly reduces the load on the origin server, allowing it to handle more requests.
- Resource Efficiency: Less processing is required on the origin server, saving computational resources.
- Resilience: Even if the external API is temporarily unavailable, the cache can continue serving data.
- Consistency: Frequently accessed data remains available even in high-demand situations.
- Cost Reduction: Fewer external API calls mean less resource consumption and potentially lower costs for external services.
- Infrastructure Optimization: Reduced need to scale infrastructure due to decreased load.
To demonstrate cache efficiency, you can:
1. Make a first call to the `/api/posts` endpoint:
   - Observe the "Fetching from external API" log
   - Note the response time (a timing command is sketched after this list)
2. Make a second call immediately:
   - Observe the "Serving from cache" log
   - Compare the response time with the first call
3. Clear the cache using `DELETE /api/cache` and repeat the test
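To put numbers on the comparison, curl's built-in timing can report the total response time, with no extra tooling needed:

```bash
# Prints total request time in seconds; run twice to compare cold vs. cached
curl -s -o /dev/null -w 'total: %{time_total}s\n' http://localhost:3000/api/posts
```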
This project implements a basic caching strategy with:
- TTL (Time To Live) of 1 hour
- In-memory cache using Redis
- Manual cache invalidation
- Appropriate TTL: Configure TTL based on data update frequency
- Invalidation: Implement appropriate cache invalidation strategies
- Monitoring: Monitor cache usage and adjust as needed
- Fallback: Always have a fallback plan in case the cache fails
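As an illustration of the last point, a fetch helper can degrade gracefully when Redis is unreachable. This is a minimal sketch reusing the assumed names from the endpoint example above:

```javascript
const axios = require('axios'); // assumed HTTP client, as in the sketch above

// Sketch: serve from the origin when Redis is unreachable (names are assumptions).
async function getPostsWithFallback(redis, originUrl) {
  try {
    const cached = await redis.get('posts');
    if (cached) return JSON.parse(cached); // normal cache hit
  } catch (err) {
    console.warn('Cache read failed, falling back to origin:', err.message);
  }
  const { data } = await axios.get(originUrl); // cache miss or cache outage
  try {
    await redis.set('posts', JSON.stringify(data), { EX: 3600 });
  } catch (err) {
    console.warn('Cache write failed (non-fatal):', err.message);
  }
  return data;
}
```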