About This Topic
A system's life does not end when it is built; ongoing operation is what matters. This project focused on designs that minimize effort from deployment through day-to-day operations.
Auto-Deployment to Cloud
Simply push the source code, and it is automatically deployed to the cloud environment:

1. The developer pushes code to the Git repository
2. The cloud platform detects the change
3. Dependencies are installed and the code is compiled
4. The new version is rolled out
5. Normal operation is verified and the deployment completes
Auto-Deployment Benefits
| Benefit | Description |
|---|---|
| No server provisioning | No physical servers or VPS needed |
| Reduced setup effort | No OS or middleware configuration |
| Immediate reflection | Minutes from code push to production |
| Rollback capability | Can revert to previous version if issues |
Auto-Scaling
Even as usage increases, the cloud automatically adjusts resources:

- Low traffic: a single instance handles all processing, keeping the configuration minimal for cost savings
- Increased traffic: 2-3 instances process requests in parallel
- Traffic spikes: the platform scales out automatically and distributes request processing across instances
Scaling Points
| Situation | Instance Count | Response |
|---|---|---|
| Night/Weekend | Minimum (0-1) | Minimize cost |
| Normal business hours | 2-3 | Stable response |
| Month-end aggregation | Auto-increase | Prevent processing delays |
Serverless architecture means you pay only for what you use; since there are no charges during periods without access, it is cost-effective.
Configuration via Environment Variables
Settings are managed through environment variables. Adjust behavior without changing code.
Required Settings
| Variable | Description | Example |
|---|---|---|
| RUN_TOKEN | Secret token for authentication | 32+ character string |
| FREEE_CLIENT_ID | Invoicing service (freee) client ID | ID issued by the freee API |
| FREEE_CLIENT_SECRET | Invoicing service (freee) client secret | Secret key issued by the freee API |
| FREEE_REFRESH_TOKEN | Refresh token for access-token renewal | Token issued by the freee API |
Optional Settings (with defaults)
| Variable | Description | Default |
|---|---|---|
| RATE_LIMIT_PER_MINUTE | Request limit per minute | 60 |
| CONCURRENT_FREEE_CALLS | Concurrent API calls | 2 |
| RETRY_MAX_ATTEMPTS | Maximum retry attempts | 5 |
| REQUEST_TIMEOUT_MS | Timeout (milliseconds) | 30000 |
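The retry setting above can be combined with exponential backoff into a simple retry loop. A minimal sketch, assuming the variable name from the table; `call` and `base_delay` are illustrative placeholders, not names from the project:

```python
import os
import time


def call_with_retry(call, *, base_delay=0.5):
    """Retry `call` with exponential backoff, up to RETRY_MAX_ATTEMPTS times."""
    max_attempts = int(os.environ.get("RETRY_MAX_ATTEMPTS", "5"))
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            # Exponential backoff between attempts: 0.5s, 1s, 2s, ...
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In practice you would retry only on transient failures (e.g. HTTP 429 or 5xx responses) rather than on every exception.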
Testing Settings
| Variable | Description | Purpose |
|---|---|---|
| FREEE_DRY_RUN | Dry-run mode (true/false) | Test without production send |
| ENABLE_STRUCTURED_LOGGING | Detailed log output (true/false) | Enable during debugging |
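Taken together, the tables above suggest a small configuration loader: required variables fail fast when missing, and optional ones fall back to their defaults. A hedged sketch using the variable names from the tables; the loader structure itself is illustrative, not the project's actual code:

```python
import os

REQUIRED = ["RUN_TOKEN", "FREEE_CLIENT_ID", "FREEE_CLIENT_SECRET", "FREEE_REFRESH_TOKEN"]
DEFAULTS = {
    "RATE_LIMIT_PER_MINUTE": "60",
    "CONCURRENT_FREEE_CALLS": "2",
    "RETRY_MAX_ATTEMPTS": "5",
    "REQUEST_TIMEOUT_MS": "30000",
}


def load_config(env=os.environ):
    """Validate required variables and apply defaults for optional ones."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        # Fail fast at startup instead of at the first API call
        raise RuntimeError(f"Missing required environment variables: {missing}")
    config = {name: env[name] for name in REQUIRED}
    for name, default in DEFAULTS.items():
        config[name] = int(env.get(name, default))
    # Testing flags default to off
    config["FREEE_DRY_RUN"] = env.get("FREEE_DRY_RUN", "false") == "true"
    return config
```

Validating at startup means a misconfigured deployment is caught immediately rather than surfacing as a runtime authentication error.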
Timeout Settings
Appropriate timeouts are set for long-running processes: a timer starts when processing begins, API calls and other work run under it, and if the configured limit is exceeded, the operation is aborted.
Timeout Setting Guidelines
| Processing | Recommended | Reason |
|---|---|---|
| Single API call | 30 seconds | Usually completes within seconds |
| Batch (~100 items) | 2 minutes | Accounts for pagination |
| Large scale (~500 items) | 5 minutes | Uses the platform's maximum allowed duration |
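The timer-based flow described above can be sketched with a worker thread and a deadline, taking the limit from `REQUEST_TIMEOUT_MS` in the configuration section. This is a sketch of the pattern, not the project's actual implementation:

```python
import os
from concurrent.futures import ThreadPoolExecutor, TimeoutError


def run_with_timeout(func, *args):
    """Run `func`, raising an error if it exceeds REQUEST_TIMEOUT_MS."""
    timeout_s = int(os.environ.get("REQUEST_TIMEOUT_MS", "30000")) / 1000
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, *args)
        try:
            return future.result(timeout=timeout_s)
        except TimeoutError:
            # Note: the worker thread keeps running until it finishes;
            # the caller simply stops waiting for it.
            raise RuntimeError(f"Processing exceeded {timeout_s:.0f}s timeout")
```

On a real serverless platform the runtime enforces its own hard limit as well, so the application timeout should stay below the platform's maximum.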
Distributed Cache Utilization
Using Redis (Upstash) distributed cache enables:
| Function | Description | Benefit |
|---|---|---|
| Idempotency cache | Keep processed keys for 7 days | Shared across instances |
| Token cache | Keep access token until expiry | Reduce API calls |
| Distributed lock | Prevent concurrent updates | Ensure data consistency |
Distributed cache is optional but recommended. Without it, only the in-memory cache operates; because that cache is not shared between instances, duplicate processing becomes possible.
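The idempotency-cache row above can be illustrated with the standard Redis pattern: SET with NX (only if the key is absent) and a 7-day expiry. The sketch below uses a tiny in-memory stand-in so it runs anywhere; real code would issue the same `set(..., nx=True, ex=...)` call on a redis-py or Upstash client:

```python
SEVEN_DAYS = 7 * 24 * 60 * 60  # idempotency keys are kept for 7 days


class FakeRedis:
    """In-memory stand-in exposing the redis-py `set` signature used below."""

    def __init__(self):
        self.store = {}

    def set(self, key, value, nx=False, ex=None):
        if nx and key in self.store:
            return None  # key already exists: the NX set fails, as in redis-py
        self.store[key] = value
        return True


def try_claim(cache, idempotency_key):
    """Return True only if this instance is the first to process the key."""
    # SET NX EX is a single atomic operation in real Redis, so two
    # instances can never both claim the same key.
    return cache.set(f"processed:{idempotency_key}", "1", nx=True, ex=SEVEN_DAYS) is True
```

The distributed-lock row follows the same pattern with a shorter expiry and an explicit release when the update completes.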
Operations Checklist
Before Deployment
- [ ] Set all required environment variables
- [ ] Verify operation in dry-run mode
- [ ] Confirm the token expiry is sufficient
During Operations
- [ ] Check error logs periodically
- [ ] Confirm the rate limit is not being reached
- [ ] Confirm response times are normal
During Incidents
- [ ] Identify the problem area using structured logs
- [ ] Switch to dry-run mode if needed
- [ ] Test with a single item after any configuration change
What This Design Achieves
For Operations
- Minimize deployment cost: No servers, minimal configuration
- Reduce operational load: Auto-scaling, auto-deployment
- Flexible adjustment: Customize behavior via environment variables
For Field Staff
- Stable service: No delays even at month-end
- Immediate usability: No complex setup
- Easy problem response: Understand situation via logs and health checks