Compare commits

...

37 Commits

Author SHA1 Message Date
7c08e90a56 fix: normalize transition timestamps across terminal
Some checks failed: Deploy Pounce (Auto) / deploy (push) was cancelled
Convert timezone-aware datetimes to naive UTC before persisting (prevents Postgres 500s),
add deletion_date migrations, and unify transition countdown + tracked-state across Drops,
Watchlist, and Analyze panel.
2025-12-21 18:14:25 +01:00
719f4c0724 feat: Canonical status metadata across domains and drops 2025-12-21 17:39:47 +01:00
1a63533333 ui: Show status banner in AnalyzePanel for watchlist too 2025-12-21 17:20:51 +01:00
bf579b93e6 fix: Prevent admin user-delete 500 via soft-delete fallback 2025-12-21 17:18:31 +01:00
f1cb360e4f feat: Add LLM Gateway config to deployment pipeline 2025-12-21 17:10:49 +01:00
9d99e6ee0a perf: Batch verify drops status + bulk DB updates 2025-12-21 16:53:30 +01:00
f36d55f814 perf: Bulk insert drops + add critical DB indexes 2025-12-21 16:13:46 +01:00
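The bulk-insert approach in this commit — one batched statement plus indexes on the hot filter columns — can be illustrated with stdlib sqlite3 (the project itself uses SQLAlchemy on Postgres; table and index names here are made up for the sketch):

```python
import sqlite3


def bulk_insert_drops(conn: sqlite3.Connection, rows: list[tuple[str, str]]) -> None:
    """Insert dropped domains in one executemany batch instead of per-row INSERTs."""
    conn.execute("CREATE TABLE IF NOT EXISTS dropped_domains (domain TEXT, tld TEXT)")
    # Index the column the drops views filter on
    conn.execute("CREATE INDEX IF NOT EXISTS idx_drops_tld ON dropped_domains (tld)")
    with conn:  # single transaction for the whole batch
        conn.executemany(
            "INSERT INTO dropped_domains (domain, tld) VALUES (?, ?)", rows
        )
```

The win is the same on any backend: one round-trip and one transaction for the batch, rather than thousands of individual INSERT/commit cycles.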
93bd23c1cd perf: Harden zone sync + scheduler concurrency 2025-12-21 16:07:35 +01:00
54fcfd80cb chore: Trigger deploy after runner re-register 2025-12-21 16:00:51 +01:00
7415d0b696 chore: Trigger deploy after runner fix 2025-12-21 15:59:32 +01:00
9205536bf2 perf: Reuse pooled http client for RDAP 2025-12-21 15:50:59 +01:00
4ec86789cf chore: Validate runner checkout reliability 2025-12-21 15:45:50 +01:00
fd2625a34d perf: Separate scheduler + harden deploy sync 2025-12-21 15:44:35 +01:00
f17206b2f4 fix: Deploy without sudo mv (write env directly) 2025-12-21 15:38:30 +01:00
85c5c6e39d fix: Make deploy workflow valid YAML + python 2025-12-21 15:36:43 +01:00
09fe679f9b fix: Repair deploy workflow YAML (indent heredoc) 2025-12-21 15:35:20 +01:00
6a0e0c159c ci: Auto deploy via server-side pounce-deploy 2025-12-21 15:33:50 +01:00
faa1d61923 chore: Trigger CI run 2025-12-21 15:24:48 +01:00
d170d6f729 ci: Auto-deploy on push via SSH
- Gitea Actions workflow now syncs repo to server, builds images, restarts containers, and runs health checks
- Removed all hardcoded secrets from scripts/deploy.sh
- Added CI/CD documentation and ignored .env.deploy

NOTE: Existing secrets previously committed must be rotated.
2025-12-21 15:23:04 +01:00
13334f6cdd fix: Simplify CI pipeline, use local deploy script 2025-12-21 15:14:42 +01:00
436e3743ed feat: Add local deployment script
- Created scripts/deploy.sh for reliable local deployments
- Simplified CI pipeline to code quality checks only
- Deploy via: ./scripts/deploy.sh [backend|frontend]

The Gitea Actions runner cannot access host Docker in Coolify
environment, so deployments must be triggered locally.
2025-12-21 15:12:22 +01:00
86e0057adc refactor: SSH-based deployment pipeline
Changed from Docker-in-Docker to SSH-based deployment:
- Uses rsync to sync code to server
- Builds Docker images on host directly
- More reliable for Coolify environments
- Proper secret management via SSH
2025-12-21 15:07:58 +01:00
380c0313d9 refactor: Simplify CI/CD pipeline for reliability
- Removed REPO_PATH workaround (use checkout directly)
- Simplified env vars with global definitions
- Fixed network names as env vars
- Updated DATABASE_URL in Gitea secrets
- Cleaner deployment steps
- Better health checks
2025-12-21 15:03:43 +01:00
ddb1a26d47 fix: Implement IANA Bootstrap RDAP for reliable domain checking
Major improvements to domain availability checking:

1. IANA Bootstrap (rdap.org) as universal fallback
   - Works for ALL TLDs without rate limiting
   - Automatically redirects to correct registry
   - Faster than direct endpoints for most TLDs

2. Updated drop_status_checker.py
   - Uses IANA Bootstrap with follow_redirects=True
   - Preferred endpoints for .ch/.li/.de (direct, faster)
   - Better rate limiting (300ms delay, 3 concurrent max)

3. Updated domain_checker.py
   - New _check_rdap_iana() method
   - Removed RDAP_BLOCKED_TLDS (not needed with IANA Bootstrap)
   - Simplified check_domain() priority flow

Priority order:
1. Custom RDAP (.ch/.li/.de) - fastest
2. IANA Bootstrap (all other TLDs) - reliable
3. WHOIS - fallback
4. DNS - final validation

This eliminates RDAP timeouts and bans completely.
2025-12-21 14:54:51 +01:00
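The priority order above is a fall-through chain: try each source, move on when one errors or returns nothing. A minimal sketch of that control flow (checker names are placeholders, not the actual methods in domain_checker.py):

```python
import asyncio


async def check_with_fallback(domain, checkers):
    """Try each availability source in priority order; fall through on error/None.

    `checkers` is an ordered list of (source_name, async_callable) pairs,
    e.g. custom RDAP, then IANA Bootstrap, then WHOIS, then DNS.
    """
    for source, check in checkers:
        try:
            result = await check(domain)
        except Exception:
            continue  # e.g. timeout or rate-limit ban: try the next source
        if result is not None:
            return source, result
    return "none", None
```

With IANA Bootstrap as the universal second rung, the later rungs (WHOIS, DNS) only fire when both RDAP paths fail.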
5f3856fce6 fix: RDAP ban prevention and DNS fallback
Problem: We are banned from Afilias (.info/.biz) and Google (.dev/.app)
RDAP servers due to too many requests, causing timeouts.

Solution:
1. Added RDAP_BLOCKED_TLDS list in domain_checker.py
2. Skip RDAP for blocked TLDs, use DNS+WHOIS instead
3. Updated drop_status_checker.py to skip blocked TLDs
4. Removed banned endpoints from RDAP_ENDPOINTS

TLDs now using DNS-only: .info, .biz, .org, .dev, .app, .xyz, .online, .com, .net
TLDs still using RDAP: .ch, .li, .de (working fine)

This prevents bans and timeouts while still providing availability checks.
2025-12-21 14:39:40 +01:00
84964ccb84 fix: use correct api.request() method in ZonesTab 2025-12-21 13:30:36 +01:00
f9e6025dc4 feat: Premium infrastructure improvements
1. Parallel Zone Downloads (3x faster)
   - CZDS zones now download in parallel with semaphore
   - Configurable max_concurrent (default: 3)
   - Added timing logs for performance monitoring

2. Email Alerts for Ops
   - New send_ops_alert() in email service
   - Automatic alerts on zone sync failures
   - Critical alerts on complete job crashes
   - Severity levels: info, warning, error, critical

3. Admin Zone Sync Dashboard
   - New "Zone Sync" tab in admin panel
   - Real-time status for all TLDs
   - Manual sync trigger buttons
   - Shows drops today, total drops, last sync time
   - Health status indicators (healthy/stale/never)
   - API endpoint: GET /admin/zone-sync/status
2025-12-21 13:25:08 +01:00
3d25d87415 feat: Premium zone sync improvements
1. Parallel Zone Downloads (CZDS):
   - Downloads up to 3 TLDs concurrently
   - Reduced sync time from 3+ min to ~1 min
   - Semaphore prevents ICANN rate limits

2. Email Alerts:
   - Automatic alerts when sync fails
   - Sends to admin email with error details
   - Includes success/error summary

3. Admin Zone Sync Dashboard:
   - New "Zone Sync" tab in admin panel
   - Shows all TLDs with domain counts
   - Manual "Sync Now" buttons for Switch/CZDS
   - Live stats: drops/24h, total domains

4. Backend Improvements:
   - /admin/zone-stats endpoint
   - Fixed zone-sync endpoints with correct imports
2025-12-21 13:07:03 +01:00
6dca12dc5a fix: Add zone volume permissions to deploy pipeline 2025-12-21 12:47:20 +01:00
622aabf384 fix: Add dig to Docker, fix admin sync endpoints
- Added dnsutils (dig) to backend Dockerfile for DNS zone transfers
- Fixed admin zone sync endpoints with correct imports
- AsyncSessionLocal instead of async_session_maker
2025-12-21 12:41:36 +01:00
bbf6afe2f6 feat: Add admin endpoints for manual zone sync trigger 2025-12-21 12:36:32 +01:00
3bdb005efb feat: Consistent domain status across all pages
Backend:
- Added DROPPING_SOON status to DomainStatus enum
- Added deletion_date field to Domain model
- domain_checker now returns DROPPING_SOON for pending delete
- Track endpoint copies status and deletion_date from drop

Frontend:
- Watchlist shows "TRANSITION" status for dropping_soon domains
- AnalyzePanel shows consistent status from Watchlist
- Status display unified between Drops, Watchlist, and Panel
2025-12-21 12:32:53 +01:00
5df7d5cb96 fix: Consistent domain status across pages + refresh-all timezone fix
Backend:
- Fixed datetime timezone error in refresh-all endpoint
- Added _to_naive_utc() helper for PostgreSQL compatibility

Frontend:
- Watchlist now passes domain status to AnalyzePanel
- Status is consistent between Drops, Watchlist, and Sidepanel
- Shows "Available" or "Taken" status in AnalyzePanel from Watchlist
2025-12-20 23:44:53 +01:00
4995101dd1 fix: Frontend proxy uses pounce-backend in production
- next.config.js now detects NODE_ENV=production
- Uses http://pounce-backend:8000 in Docker instead of localhost
- Logs backend URL during build for debugging
2025-12-20 23:39:43 +01:00
c5a9bd83d5 fix: Track endpoint error handling, improve drops UI with tracked state
Backend:
- Fixed track endpoint duplicate key error with proper rollback
- Returns domain_id for already tracked domains

Frontend DropsTab:
- Added trackedDrops state to show "Tracked" status
- Track button shows checkmark when already in watchlist
- Status button shows "In Transition" with countdown

AnalyzePanel:
- Added dropStatus to store for passing drop info
- Shows Drop Status banner with availability
- "Buy Now" button for available domains in panel
2025-12-20 23:29:31 +01:00
fca54a93e7 fix: Rename GITHUB_CLIENT_SECRET to GH_OAUTH_SECRET (reserved name) 2025-12-20 23:09:58 +01:00
85b1be691a fix: Disable RDAP verification to prevent bans, improve drops UI
- Disabled verify_drops scheduler job (caused RDAP rate limit bans)
- Zone files now saved without RDAP verification (zone diff is reliable)
- Added date-based zone file snapshots with 3-day retention
- Improved DropsTab UI with better status display:
  - "In Transition" with countdown timer for dropping_soon
  - "Available Now" with Buy button
  - "Re-registered" for taken domains
  - Track button for dropping_soon domains
- Added --shm-size=8g to backend container for multiprocessing
- Removed duplicate host cron job (scheduler handles everything)
2025-12-20 22:56:25 +01:00
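The date-based snapshot retention mentioned above (3-day window) amounts to pruning date-stamped files past a cutoff. A sketch, assuming a hypothetical `<tld>.zone.YYYY-MM-DD` naming scheme rather than the project's actual layout:

```python
from datetime import datetime, timedelta
from pathlib import Path


def prune_snapshots(zone_dir, retention_days=3, now=None):
    """Delete date-stamped zone snapshots older than the retention window.

    Returns the names of removed files; anything that doesn't parse as a
    dated snapshot is left untouched.
    """
    now = now or datetime.utcnow()
    cutoff = (now - timedelta(days=retention_days)).date()
    removed = []
    for path in Path(zone_dir).glob("*.zone.*"):
        try:
            stamp = datetime.strptime(path.suffix.lstrip("."), "%Y-%m-%d").date()
        except ValueError:
            continue  # not a dated snapshot: leave it alone
        if stamp < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

Running this after each daily sync keeps exactly the window needed for zone diffing without letting the volume grow unbounded.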
40 changed files with 2418 additions and 859 deletions

View File

@@ -1,54 +1,45 @@
-name: Deploy Pounce
+name: Deploy Pounce (Auto)
 on:
   push:
-    branches:
-      - main
+    branches: [main]
 jobs:
-  build-and-deploy:
+  deploy:
     runs-on: ubuntu-latest
     steps:
-      - name: Checkout code
+      - name: Checkout
        uses: actions/checkout@v4
-      - name: Set up environment
+      - name: Install deploy tooling
        run: |
-          echo "REPO_PATH=/home/administrator/pounce" >> $GITHUB_ENV
-          echo "BACKEND_IMAGE=pounce-backend" >> $GITHUB_ENV
-          echo "FRONTEND_IMAGE=pounce-frontend" >> $GITHUB_ENV
+          apt-get update
+          apt-get install -y --no-install-recommends openssh-client rsync ca-certificates
-      - name: Sync code to deploy directory
+      - name: Setup SSH key
        run: |
-          mkdir -p ${{ env.REPO_PATH }}
-          cp -r . ${{ env.REPO_PATH }}/
-          echo "Code synced to ${{ env.REPO_PATH }}"
+          mkdir -p ~/.ssh
+          echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/deploy_key
+          chmod 600 ~/.ssh/deploy_key
+          ssh-keyscan -H "${{ secrets.DEPLOY_HOST }}" >> ~/.ssh/known_hosts 2>/dev/null
-      - name: Build Backend Docker Image
+      - name: Sync repository to server
        run: |
-          cd ${{ env.REPO_PATH }}/backend
-          docker build -t ${{ env.BACKEND_IMAGE }}:${{ github.sha }} -t ${{ env.BACKEND_IMAGE }}:latest .
-          echo "✅ Backend image built successfully"
+          rsync -az --delete \
+            -e "ssh -i ~/.ssh/deploy_key -o StrictHostKeyChecking=yes" \
+            --exclude ".git" \
+            --exclude ".venv" \
+            --exclude "venv" \
+            --exclude "backend/.venv" \
+            --exclude "backend/venv" \
+            --exclude "frontend/node_modules" \
+            --exclude "frontend/.next" \
+            --exclude "**/__pycache__" \
+            --exclude "**/*.pyc" \
+            ./ \
+            "${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:${{ secrets.DEPLOY_PATH }}/"
-      - name: Build Frontend Docker Image
-        run: |
-          cd ${{ env.REPO_PATH }}/frontend
-          # Create .env.local with correct URLs
-          cat > .env.local << EOF
-          NEXT_PUBLIC_API_URL=https://api.pounce.ch
-          BACKEND_URL=http://pounce-backend:8000
-          EOF
-          docker build \
-            --build-arg NEXT_PUBLIC_API_URL=https://api.pounce.ch \
-            --build-arg BACKEND_URL=http://pounce-backend:8000 \
-            -t ${{ env.FRONTEND_IMAGE }}:${{ github.sha }} \
-            -t ${{ env.FRONTEND_IMAGE }}:latest \
-            .
-          echo "✅ Frontend image built successfully"
-      - name: Deploy Backend
+      - name: Generate backend env file (from secrets)
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
@@ -56,128 +47,113 @@ jobs:
          STRIPE_SECRET_KEY: ${{ secrets.STRIPE_SECRET_KEY }}
          STRIPE_WEBHOOK_SECRET: ${{ secrets.STRIPE_WEBHOOK_SECRET }}
          GOOGLE_CLIENT_SECRET: ${{ secrets.GOOGLE_CLIENT_SECRET }}
-          GITHUB_CLIENT_SECRET: ${{ secrets.GITHUB_CLIENT_SECRET }}
+          GH_OAUTH_SECRET: ${{ secrets.GH_OAUTH_SECRET }}
+          CZDS_USERNAME: ${{ secrets.CZDS_USERNAME }}
+          CZDS_PASSWORD: ${{ secrets.CZDS_PASSWORD }}
+          SWITCH_TSIG_CH_SECRET: ${{ secrets.SWITCH_TSIG_CH_SECRET }}
+          SWITCH_TSIG_LI_SECRET: ${{ secrets.SWITCH_TSIG_LI_SECRET }}
+          LLM_GATEWAY_URL: ${{ secrets.LLM_GATEWAY_URL }}
+          LLM_GATEWAY_API_KEY: ${{ secrets.LLM_GATEWAY_API_KEY }}
        run: |
-          # Stop existing container
-          docker stop pounce-backend 2>/dev/null || true
-          docker rm pounce-backend 2>/dev/null || true
-          # Ensure persistent directories exist
-          sudo mkdir -p /data/pounce/zones/czds /data/pounce/zones/switch /data/pounce/logs
-          sudo chmod -R 755 /data/pounce
-          # Run new container with secrets from environment
-          docker run -d \
-            --name pounce-backend \
-            --network n0488s44osgoow4wgo04ogg0 \
-            --restart unless-stopped \
-            -v /data/pounce/zones/czds:/data/czds \
-            -v /data/pounce/zones/switch:/data/switch \
-            -v /data/pounce/logs:/data/logs \
-            -e CZDS_DATA_DIR="/data/czds" \
-            -e SWITCH_DATA_DIR="/data/switch" \
-            -e ZONE_RETENTION_DAYS="3" \
-            -e DATABASE_URL="${DATABASE_URL}" \
-            -e SECRET_KEY="${SECRET_KEY}" \
-            -e JWT_SECRET="${SECRET_KEY}" \
-            -e REDIS_URL="redis://pounce-redis:6379/0" \
-            -e ENABLE_JOB_QUEUE="true" \
-            -e CORS_ORIGINS="https://pounce.ch,https://www.pounce.ch" \
-            -e COOKIE_SECURE="true" \
-            -e SITE_URL="https://pounce.ch" \
-            -e FRONTEND_URL="https://pounce.ch" \
-            -e ENVIRONMENT="production" \
-            -e ENABLE_SCHEDULER="true" \
-            -e SMTP_HOST="smtp.zoho.eu" \
-            -e SMTP_PORT="465" \
-            -e SMTP_USER="hello@pounce.ch" \
-            -e SMTP_PASSWORD="${SMTP_PASSWORD}" \
-            -e SMTP_FROM_EMAIL="hello@pounce.ch" \
-            -e SMTP_FROM_NAME="pounce" \
-            -e SMTP_USE_TLS="false" \
-            -e SMTP_USE_SSL="true" \
-            -e STRIPE_SECRET_KEY="${STRIPE_SECRET_KEY}" \
-            -e STRIPE_PUBLISHABLE_KEY="pk_live_51ScLbjCtFUamNRpNeFugrlTIYhszbo8GovSGiMnPwHpZX9p3SGtgG8iRHYRIlAtg9M9sl3mvT5r8pwXP3mOsPALG00Wk3j0wH4" \
-            -e STRIPE_PRICE_TRADER="price_1ScRlzCtFUamNRpNQdMpMzxV" \
-            -e STRIPE_PRICE_TYCOON="price_1SdwhSCtFUamNRpNEXTSuGUc" \
-            -e STRIPE_WEBHOOK_SECRET="${STRIPE_WEBHOOK_SECRET}" \
-            -e GOOGLE_CLIENT_ID="865146315769-vi7vcu91d3i7huv8ikjun52jo9ob7spk.apps.googleusercontent.com" \
-            -e GOOGLE_CLIENT_SECRET="${GOOGLE_CLIENT_SECRET}" \
-            -e GOOGLE_REDIRECT_URI="https://pounce.ch/api/v1/oauth/google/callback" \
-            -e GITHUB_CLIENT_ID="Ov23liBjROk39vYXi3G5" \
-            -e GITHUB_CLIENT_SECRET="${GITHUB_CLIENT_SECRET}" \
-            -e GITHUB_REDIRECT_URI="https://pounce.ch/api/v1/oauth/github/callback" \
-            -l "traefik.enable=true" \
-            -l "traefik.http.routers.pounce-api.rule=Host(\`api.pounce.ch\`)" \
-            -l "traefik.http.routers.pounce-api.entryPoints=https" \
-            -l "traefik.http.routers.pounce-api.tls=true" \
-            -l "traefik.http.routers.pounce-api.tls.certresolver=letsencrypt" \
-            -l "traefik.http.services.pounce-api.loadbalancer.server.port=8000" \
-            -l "traefik.http.routers.pounce-api-http.rule=Host(\`api.pounce.ch\`)" \
-            -l "traefik.http.routers.pounce-api-http.entryPoints=http" \
-            -l "traefik.http.routers.pounce-api-http.middlewares=redirect-to-https" \
-            ${{ env.BACKEND_IMAGE }}:latest
-          # Connect to coolify network for Traefik
-          docker network connect coolify pounce-backend 2>/dev/null || true
-          echo "✅ Backend deployed"
+          python3 - <<'PY'
+          import os
+          from pathlib import Path
+          env = {
+              # Core
+              "ENVIRONMENT": "production",
+              # Scheduler will run in separate container (pounce-scheduler)
+              "ENABLE_SCHEDULER": "false",
+              "DEBUG": "false",
+              "COOKIE_SECURE": "true",
+              "CORS_ORIGINS": "https://pounce.ch,https://www.pounce.ch",
+              "SITE_URL": "https://pounce.ch",
+              "FRONTEND_URL": "https://pounce.ch",
+              # Data dirs
+              "CZDS_DATA_DIR": "/data/czds",
+              "SWITCH_DATA_DIR": "/data/switch",
+              "ZONE_RETENTION_DAYS": "3",
+              # DB/Redis
+              "DATABASE_URL": os.environ["DATABASE_URL"],
+              "REDIS_URL": "redis://pounce-redis:6379/0",
+              # Rate limiting must be shared across workers in production
+              "RATE_LIMIT_STORAGE_URI": "redis://pounce-redis:6379/2",
+              # Auth
+              "SECRET_KEY": os.environ["SECRET_KEY"],
+              "JWT_SECRET": os.environ["SECRET_KEY"],
+              # SMTP
+              "SMTP_HOST": "smtp.zoho.eu",
+              "SMTP_PORT": "465",
+              "SMTP_USER": "hello@pounce.ch",
+              "SMTP_PASSWORD": os.environ["SMTP_PASSWORD"],
+              "SMTP_FROM_EMAIL": "hello@pounce.ch",
+              "SMTP_FROM_NAME": "pounce",
+              "SMTP_USE_TLS": "false",
+              "SMTP_USE_SSL": "true",
+              # Stripe
+              "STRIPE_SECRET_KEY": os.environ["STRIPE_SECRET_KEY"],
+              "STRIPE_PUBLISHABLE_KEY": "pk_live_51ScLbjCtFUamNRpNeFugrlTIYhszbo8GovSGiMnPwHpZX9p3SGtgG8iRHYRIlAtg9M9sl3mvT5r8pwXP3mOsPALG00Wk3j0wH4",
+              "STRIPE_PRICE_TRADER": "price_1ScRlzCtFUamNRpNQdMpMzxV",
+              "STRIPE_PRICE_TYCOON": "price_1SdwhSCtFUamNRpNEXTSuGUc",
+              "STRIPE_WEBHOOK_SECRET": os.environ["STRIPE_WEBHOOK_SECRET"],
+              # OAuth
+              "GOOGLE_CLIENT_ID": "865146315769-vi7vcu91d3i7huv8ikjun52jo9ob7spk.apps.googleusercontent.com",
+              "GOOGLE_CLIENT_SECRET": os.environ["GOOGLE_CLIENT_SECRET"],
+              "GOOGLE_REDIRECT_URI": "https://pounce.ch/api/v1/oauth/google/callback",
+              "GITHUB_CLIENT_ID": "Ov23liBjROk39vYXi3G5",
+              "GITHUB_CLIENT_SECRET": os.environ["GH_OAUTH_SECRET"],
+              "GITHUB_REDIRECT_URI": "https://pounce.ch/api/v1/oauth/github/callback",
+              # CZDS
+              "CZDS_USERNAME": os.environ["CZDS_USERNAME"],
+              "CZDS_PASSWORD": os.environ["CZDS_PASSWORD"],
+              # Switch TSIG (AXFR)
+              "SWITCH_TSIG_CH_SECRET": os.environ["SWITCH_TSIG_CH_SECRET"],
+              "SWITCH_TSIG_LI_SECRET": os.environ["SWITCH_TSIG_LI_SECRET"],
+              # LLM Gateway (Mistral Nemo via Ollama)
+              "LLM_GATEWAY_URL": os.environ.get("LLM_GATEWAY_URL", ""),
+              "LLM_GATEWAY_API_KEY": os.environ.get("LLM_GATEWAY_API_KEY", ""),
+          }
+          lines = []
+          for k, v in env.items():
+              if v is None:
+                  continue
+              lines.append(f"{k}={v}")
+          Path("backend.env").write_text("\n".join(lines) + "\n")
+          PY
-      - name: Deploy Frontend
-        run: |
-          # Stop existing container
-          docker stop pounce-frontend 2>/dev/null || true
-          docker rm pounce-frontend 2>/dev/null || true
-          # Run new container
-          docker run -d \
-            --name pounce-frontend \
-            --network coolify \
-            --restart unless-stopped \
-            -l "traefik.enable=true" \
-            -l "traefik.http.routers.pounce-web.rule=Host(\`pounce.ch\`) || Host(\`www.pounce.ch\`)" \
-            -l "traefik.http.routers.pounce-web.entryPoints=https" \
-            -l "traefik.http.routers.pounce-web.tls=true" \
-            -l "traefik.http.routers.pounce-web.tls.certresolver=letsencrypt" \
-            -l "traefik.http.services.pounce-web.loadbalancer.server.port=3000" \
-            -l "traefik.http.routers.pounce-web-http.rule=Host(\`pounce.ch\`) || Host(\`www.pounce.ch\`)" \
-            -l "traefik.http.routers.pounce-web-http.entryPoints=http" \
-            -l "traefik.http.routers.pounce-web-http.middlewares=redirect-to-https" \
-            ${{ env.FRONTEND_IMAGE }}:latest
-          # Connect to supabase network for backend access
-          docker network connect n0488s44osgoow4wgo04ogg0 pounce-frontend 2>/dev/null || true
-          echo "✅ Frontend deployed"
+      - name: Upload backend env to server
+        run: |
+          rsync -az \
+            -e "ssh -i ~/.ssh/deploy_key -o StrictHostKeyChecking=yes" \
+            ./backend.env \
+            "${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:/data/pounce/env/backend.env"
-      - name: Health Check
+      - name: Deploy on server (pounce-deploy)
        run: |
-          echo "Waiting for services to start..."
-          sleep 15
-          echo "=== Backend Health Check ==="
-          curl -sf http://localhost:8000/health || curl -sf http://pounce-backend:8000/health || echo "Backend starting..."
-          echo ""
-          echo "=== Container Status ==="
-          docker ps --filter "name=pounce" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
+          ssh -i ~/.ssh/deploy_key "${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}" << 'DEPLOY_EOF'
+          set -euo pipefail
+          chmod 600 /data/pounce/env/backend.env
+          sudo /usr/local/bin/pounce-deploy
+          DEPLOY_EOF
-      - name: Cleanup
-        run: |
-          docker image prune -f
-          docker container prune -f
-          echo "✅ Cleanup complete"
-      - name: Deployment Summary
+      - name: Summary
        run: |
          echo "=========================================="
-          echo "🎉 DEPLOYMENT SUCCESSFUL!"
+          echo "🎉 AUTO DEPLOY COMPLETED"
          echo "=========================================="
          echo "Commit: ${{ github.sha }}"
-          echo "Branch: ${{ github.ref_name }}"
-          echo "Time: $(date)"
-          echo ""
-          echo "Services:"
-          echo "  - Frontend: https://pounce.ch"
-          echo "  - Backend: https://api.pounce.ch"
+          echo "Backend: https://api.pounce.ch"
+          echo "Web: https://pounce.ch"
          echo "=========================================="

.gitignore vendored
View File

@@ -26,6 +26,7 @@ dist/
 .env
 .env.local
 .env.*.local
+.env.deploy
 *.log
 # Deployment env files (MUST NOT be committed)

View File

@@ -318,3 +318,18 @@ Empfehlungen:

View File

@@ -12,8 +12,10 @@ RUN groupadd -r pounce && useradd -r -g pounce pounce
 WORKDIR /app
 # Install system dependencies
+# dnsutils provides 'dig' for DNS zone transfers (AXFR)
 RUN apt-get update && apt-get install -y --no-install-recommends \
     curl \
+    dnsutils \
     && rm -rf /var/lib/apt/lists/*
 # Install Python dependencies

View File

@@ -662,15 +662,29 @@ async def delete_user(
     db: Database,
     admin: User = Depends(require_admin),
 ):
-    """Delete a user and all their data."""
+    """
+    Delete a user and all their data.
+
+    Production-hardening:
+    - Prefer hard-delete (keeps DB tidy).
+    - If hard-delete is blocked by FK constraints, fall back to a safe deactivation
+      (soft-delete) so the admin UI never hits a 500.
+    """
     from app.models.blog import BlogPost
     from app.models.admin_log import AdminActivityLog
+    from app.services.auth import AuthService
+    from sqlalchemy.exc import IntegrityError
+    import secrets
     result = await db.execute(select(User).where(User.id == user_id))
     user = result.scalar_one_or_none()
     if not user:
         raise HTTPException(status_code=404, detail="User not found")
+    # Safety rails
+    if user.id == admin.id:
+        raise HTTPException(status_code=400, detail="Cannot delete your own admin account")
     if user.is_admin:
         raise HTTPException(status_code=400, detail="Cannot delete admin user")
@@ -687,17 +701,47 @@ async def delete_user(
         AdminActivityLog.__table__.delete().where(AdminActivityLog.admin_id == user_id)
     )
-    # Now delete the user (cascades to domains, subscriptions, portfolio, price_alerts)
-    await db.delete(user)
-    await db.commit()
+    # Now delete the user (cascades to domains, subscriptions, portfolio, listings, alerts, etc.)
+    # If FK constraints block the delete (e.g., some rows reference users.id without cascade),
+    # we fall back to a safe soft-delete.
+    try:
+        await db.delete(user)
+        await db.commit()
+        deleted_mode = "hard"
+    except IntegrityError:
+        await db.rollback()
+        # Soft delete: disable account + remove auth factors so the user can never log in again.
+        # (We keep the row to satisfy FK constraints elsewhere.)
+        user.is_active = False
+        user.is_verified = False
+        user.hashed_password = AuthService.hash_password(secrets.token_urlsafe(32))
+        user.stripe_customer_id = None
+        user.password_reset_token = None
+        user.password_reset_expires = None
+        user.email_verification_token = None
+        user.email_verification_expires = None
+        user.oauth_provider = None
+        user.oauth_id = None
+        user.oauth_avatar = None
+        user.last_login = None
+        await db.commit()
+        deleted_mode = "soft"
     # Log this action
     await log_admin_activity(
         db, admin.id, "user_delete",
-        f"Deleted user {user_email} and all their data"
+        f"Deleted user {user_email} (mode={deleted_mode})"
     )
-    return {"message": f"User {user_email} and all their data have been deleted"}
+    if deleted_mode == "hard":
+        return {"message": f"User {user_email} and all their data have been deleted"}
+    return {
+        "message": f"User {user_email} has been deactivated (soft delete) due to existing references",
+        "mode": "soft",
+    }
 @router.post("/users/{user_id}/upgrade")
@@ -1726,3 +1770,142 @@ async def force_activate_listing(
         "slug": listing.slug,
         "public_url": listing.public_url,
     }
+
+# ============== Zone File Sync ==============
+
+@router.post("/zone-sync/switch")
+async def trigger_switch_sync(
+    background_tasks: BackgroundTasks,
+    db: Database,
+    admin: User = Depends(require_admin),
+):
+    """
+    Trigger manual Switch.ch zone file sync (.ch, .li).
+    Admin only.
+    """
+    from app.services.zone_file import ZoneFileService
+
+    async def run_sync():
+        from app.database import AsyncSessionLocal
+        async with AsyncSessionLocal() as session:
+            zf = ZoneFileService()
+            for tld in ["ch", "li"]:
+                await zf.run_daily_sync(session, tld)
+        return {"status": "complete"}
+
+    background_tasks.add_task(run_sync)
+    return {
+        "status": "started",
+        "message": "Switch.ch zone sync started in background. Check logs for progress.",
+    }
+
+@router.post("/zone-sync/czds")
+async def trigger_czds_sync(
+    background_tasks: BackgroundTasks,
+    db: Database,
+    admin: User = Depends(require_admin),
+):
+    """
+    Trigger manual ICANN CZDS zone file sync (gTLDs).
+    Admin only.
+    """
+    from app.services.czds_client import CZDSClient
+
+    async def run_sync():
+        from app.database import AsyncSessionLocal
+        async with AsyncSessionLocal() as session:
+            client = CZDSClient()
+            result = await client.sync_all_zones(session, parallel=True)
+        return result
+
+    background_tasks.add_task(run_sync)
+    return {
+        "status": "started",
+        "message": "ICANN CZDS zone sync started in background (parallel mode). Check logs for progress.",
+    }
+
+@router.get("/zone-sync/status")
+async def get_zone_sync_status(
+    db: Database,
+    admin: User = Depends(require_admin),
+):
+    """
+    Get zone sync status and statistics.
+    Admin only.
+    """
+    from app.models.zone_file import ZoneSnapshot, DroppedDomain
+    from sqlalchemy import func, desc
+    from datetime import timedelta
+
+    now = datetime.utcnow()
+    today = now.replace(hour=0, minute=0, second=0, microsecond=0)
+    yesterday = today - timedelta(days=1)
+
+    # Get latest snapshots per TLD
+    snapshots_stmt = (
+        select(
+            ZoneSnapshot.tld,
+            func.max(ZoneSnapshot.created_at).label("last_sync"),
+            func.max(ZoneSnapshot.domain_count).label("domain_count"),
+        )
+        .group_by(ZoneSnapshot.tld)
+    )
+    result = await db.execute(snapshots_stmt)
+    snapshots = {row.tld: {"last_sync": row.last_sync, "domain_count": row.domain_count} for row in result.all()}
+
+    # Get drops count per TLD for today
+    drops_today_stmt = (
+        select(
+            DroppedDomain.tld,
+            func.count(DroppedDomain.id).label("count"),
+        )
+        .where(DroppedDomain.dropped_date >= today)
+        .group_by(DroppedDomain.tld)
+    )
+    result = await db.execute(drops_today_stmt)
+    drops_today = {row.tld: row.count for row in result.all()}
+
+    # Total drops per TLD
+    total_drops_stmt = (
+        select(
+            DroppedDomain.tld,
+            func.count(DroppedDomain.id).label("count"),
+        )
+        .group_by(DroppedDomain.tld)
+    )
+    result = await db.execute(total_drops_stmt)
+    total_drops = {row.tld: row.count for row in result.all()}
+
+    # Build status for each TLD
+    all_tlds = set(snapshots.keys()) | set(drops_today.keys()) | set(total_drops.keys())
+    zones = []
+    for tld in sorted(all_tlds):
+        snapshot = snapshots.get(tld, {})
+        last_sync = snapshot.get("last_sync")
+        zones.append({
+            "tld": tld,
+            "last_sync": last_sync.isoformat() if last_sync else None,
+            "domain_count": snapshot.get("domain_count", 0),
+            "drops_today": drops_today.get(tld, 0),
+            "total_drops": total_drops.get(tld, 0),
+            "status": "healthy" if last_sync and last_sync > yesterday else "stale" if last_sync else "never",
+        })
+
+    return {
+        "zones": zones,
+        "summary": {
+            "total_zones": len(zones),
+            "healthy": sum(1 for z in zones if z["status"] == "healthy"),
+            "stale": sum(1 for z in zones if z["status"] == "stale"),
+            "never_synced": sum(1 for z in zones if z["status"] == "never"),
+            "total_drops_today": sum(drops_today.values()),
+            "total_drops_all": sum(total_drops.values()),
+        }
+    }

View File

@@ -30,13 +30,14 @@ async def check_domain_availability(request: DomainCheckRequest):
     return DomainCheckResponse(
         domain=result.domain,
-        status=result.status.value,
+        status=result.status,
         is_available=result.is_available,
         registrar=result.registrar,
         expiration_date=result.expiration_date,
         creation_date=result.creation_date,
         name_servers=result.name_servers,
         error_message=result.error_message,
+        status_source=getattr(result, "check_method", None),
         checked_at=datetime.utcnow(),
     )
@@ -61,13 +62,14 @@ async def check_domain_get(domain: str, quick: bool = False):
     return DomainCheckResponse(
         domain=result.domain,
-        status=result.status.value,
+        status=result.status,
         is_available=result.is_available,
         registrar=result.registrar,
         expiration_date=result.expiration_date,
         creation_date=result.creation_date,
         name_servers=result.name_servers,
         error_message=result.error_message,
+        status_source=getattr(result, "check_method", None),
         checked_at=datetime.utcnow(),
     )

View File

@@ -13,9 +13,11 @@ from app.models.subscription import TIER_CONFIG, SubscriptionTier
 from app.schemas.domain import DomainCreate, DomainResponse, DomainListResponse
 from app.services.domain_checker import domain_checker
 from app.services.domain_health import get_health_checker, HealthStatus
+from app.utils.datetime import to_naive_utc

 router = APIRouter()

 def _safe_json_loads(value: str | None, default):
     if not value:
         return default
@@ -165,6 +167,7 @@ async def add_domain(
         expiration_date=check_result.expiration_date,
         notify_on_available=domain_data.notify_on_available,
         last_checked=datetime.utcnow(),
+        last_check_method=check_result.check_method,
     )
     db.add(domain)
     await db.flush()
@@ -265,8 +268,9 @@ async def refresh_domain(
     domain.status = check_result.status
     domain.is_available = check_result.is_available
     domain.registrar = check_result.registrar
-    domain.expiration_date = check_result.expiration_date
+    domain.expiration_date = to_naive_utc(check_result.expiration_date)
     domain.last_checked = datetime.utcnow()
+    domain.last_check_method = check_result.check_method

     # Create check record
     check = DomainCheck(
@@ -342,8 +346,9 @@ async def refresh_all_domains(
     domain.status = check_result.status
     domain.is_available = check_result.is_available
     domain.registrar = check_result.registrar
-    domain.expiration_date = check_result.expiration_date
+    domain.expiration_date = to_naive_utc(check_result.expiration_date)
     domain.last_checked = datetime.utcnow()
+    domain.last_check_method = check_result.check_method

     # Create check record
     check = DomainCheck(

View File

@@ -17,6 +17,7 @@ from sqlalchemy import select, update
 from app.database import get_db
 from app.api.deps import get_current_user
 from app.models.zone_file import DroppedDomain
+from app.utils.datetime import to_iso_utc, to_naive_utc
 from app.services.zone_file import (
     ZoneFileService,
     get_dropped_domains,

@@ -213,6 +214,8 @@ async def api_check_drop_status(
     try:
         # Check with dedicated drop status checker
         status_result = await check_drop_status(full_domain)
+        persisted_deletion_date = to_naive_utc(status_result.deletion_date)

         # Update the drop in DB
         await db.execute(

@@ -221,7 +224,9 @@ async def api_check_drop_status(
             .values(
                 availability_status=status_result.status,
                 rdap_status=str(status_result.rdap_status) if status_result.rdap_status else None,
-                last_status_check=datetime.utcnow()
+                last_status_check=datetime.utcnow(),
+                deletion_date=persisted_deletion_date,
+                last_check_method=status_result.check_method,
             )
         )
         await db.commit()

@@ -234,7 +239,9 @@ async def api_check_drop_status(
             "can_register_now": status_result.can_register_now,
             "should_track": status_result.should_monitor,
             "message": status_result.message,
-            "deletion_date": status_result.deletion_date.isoformat() if status_result.deletion_date else None,
+            "deletion_date": to_iso_utc(persisted_deletion_date),
+            "status_checked_at": to_iso_utc(datetime.utcnow()),
+            "status_source": status_result.check_method,
         }
     except Exception as e:

@@ -274,22 +281,61 @@ async def api_track_drop(
             Domain.name == full_domain
         )
     )
-    if existing.scalar_one_or_none():
-        return {"status": "already_tracking", "domain": full_domain}
-
-    # Add to watchlist with notification enabled
-    domain = Domain(
-        user_id=current_user.id,
-        name=full_domain,
-        status=DomainStatus.AVAILABLE if drop.availability_status == 'available' else DomainStatus.UNKNOWN,
-        is_available=drop.availability_status == 'available',
-        notify_on_available=True,  # Enable notification!
-    )
-    db.add(domain)
-    await db.commit()
-
-    return {
-        "status": "tracking",
-        "domain": full_domain,
-        "message": f"Added {full_domain} to your Watchlist. You'll be notified when available!"
-    }
+    existing_domain = existing.scalar_one_or_none()
+    if existing_domain:
+        return {
+            "status": "already_tracking",
+            "domain": full_domain,
+            "message": f"{full_domain} is already in your Watchlist",
+            "domain_id": existing_domain.id
+        }
+
+    try:
+        # Map drop status to Domain status
+        status_map = {
+            'available': DomainStatus.AVAILABLE,
+            'dropping_soon': DomainStatus.DROPPING_SOON,
+            'taken': DomainStatus.TAKEN,
+            'unknown': DomainStatus.UNKNOWN,
+        }
+        domain_status = status_map.get(drop.availability_status, DomainStatus.UNKNOWN)
+
+        # Add to watchlist with notification enabled
+        domain = Domain(
+            user_id=current_user.id,
+            name=full_domain,
+            status=domain_status,
+            is_available=drop.availability_status == 'available',
+            deletion_date=to_naive_utc(drop.deletion_date),  # Copy deletion date for countdown
+            notify_on_available=True,  # Enable notification!
+            last_checked=datetime.utcnow(),
+            last_check_method="zone_drop",
+        )
+        db.add(domain)
+        await db.commit()
+        await db.refresh(domain)
+
+        return {
+            "status": "tracking",
+            "domain": full_domain,
+            "message": f"Added {full_domain} to your Watchlist. You'll be notified when available!",
+            "domain_id": domain.id
+        }
+    except Exception as e:
+        await db.rollback()
+        # If duplicate key error, try to find existing
+        existing = await db.execute(
+            select(Domain).where(
+                Domain.user_id == current_user.id,
+                Domain.name == full_domain
+            )
+        )
+        existing_domain = existing.scalar_one_or_none()
+        if existing_domain:
+            return {
+                "status": "already_tracking",
+                "domain": full_domain,
+                "message": f"{full_domain} is already in your Watchlist",
+                "domain_id": existing_domain.id
+            }
+        raise HTTPException(status_code=500, detail=str(e))

View File

@@ -133,9 +133,23 @@ class Settings(BaseSettings):
     # Switch.ch Zone Files (.ch, .li)
     switch_data_dir: str = "/data/switch"  # Persistent storage

+    # Switch.ch TSIG (DNS AXFR) credentials
+    # These should be provided via environment variables in production.
+    switch_tsig_ch_name: str = "tsig-zonedata-ch-public-21-01"
+    switch_tsig_ch_algorithm: str = "hmac-sha512"
+    switch_tsig_ch_secret: str = ""
+    switch_tsig_li_name: str = "tsig-zonedata-li-public-21-01"
+    switch_tsig_li_algorithm: str = "hmac-sha512"
+    switch_tsig_li_secret: str = ""
+
     # Zone File Retention (days to keep historical snapshots)
     zone_retention_days: int = 3

+    # Domain check scheduler tuning (external I/O heavy; keep conservative defaults)
+    domain_check_max_concurrent: int = 3
+    domain_check_delay_seconds: float = 0.3
+
     class Config:
         env_file = ".env"

View File
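Since these `Settings` fields are declared on a pydantic `BaseSettings` with `env_file = ".env"`, they are filled from environment variables by case-insensitive field-name match. A production `.env` fragment would therefore look roughly like this (values are illustrative placeholders — the real TSIG secrets come from Switch.ch):

```shell
# .env — illustrative values only
SWITCH_TSIG_CH_SECRET="base64-tsig-secret-for-ch=="
SWITCH_TSIG_LI_SECRET="base64-tsig-secret-for-li=="
DOMAIN_CHECK_MAX_CONCURRENT=3
DOMAIN_CHECK_DELAY_SECONDS=0.3
```

Keeping the secrets out of the repo and defaulting them to `""` in code means a misconfigured deployment fails the AXFR transfer loudly rather than leaking credentials.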

@@ -105,6 +105,75 @@ async def apply_migrations(conn: AsyncConnection) -> None:
         )
     )

+    # ---------------------------------------------------------
+    # 2b) domains indexes (watchlist list/sort/filter)
+    # ---------------------------------------------------------
+    if await _table_exists(conn, "domains"):
+        dt_type = "DATETIME" if dialect == "sqlite" else "TIMESTAMP"
+        # Canonical status metadata (optional)
+        if not await _has_column(conn, "domains", "last_check_method"):
+            logger.info("DB migrations: adding column domains.last_check_method")
+            await conn.execute(text("ALTER TABLE domains ADD COLUMN last_check_method VARCHAR(30)"))
+        if not await _has_column(conn, "domains", "deletion_date"):
+            logger.info("DB migrations: adding column domains.deletion_date")
+            await conn.execute(text(f"ALTER TABLE domains ADD COLUMN deletion_date {dt_type}"))
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_domains_user_id ON domains(user_id)"))
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_domains_status ON domains(status)"))
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_domains_user_created_at ON domains(user_id, created_at)"))
+
+    # ---------------------------------------------------------
+    # 2c) zone_snapshots indexes (admin zone status + recency)
+    # ---------------------------------------------------------
+    if await _table_exists(conn, "zone_snapshots"):
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_zone_snapshots_tld ON zone_snapshots(tld)"))
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_zone_snapshots_snapshot_date ON zone_snapshots(snapshot_date)"))
+        await conn.execute(
+            text(
+                "CREATE INDEX IF NOT EXISTS ix_zone_snapshots_tld_snapshot_date "
+                "ON zone_snapshots(tld, snapshot_date)"
+            )
+        )
+
+    # ---------------------------------------------------------
+    # 2d) dropped_domains indexes + de-duplication
+    # ---------------------------------------------------------
+    if await _table_exists(conn, "dropped_domains"):
+        dt_type = "DATETIME" if dialect == "sqlite" else "TIMESTAMP"
+        if not await _has_column(conn, "dropped_domains", "last_check_method"):
+            logger.info("DB migrations: adding column dropped_domains.last_check_method")
+            await conn.execute(text("ALTER TABLE dropped_domains ADD COLUMN last_check_method VARCHAR(30)"))
+        if not await _has_column(conn, "dropped_domains", "deletion_date"):
+            logger.info("DB migrations: adding column dropped_domains.deletion_date")
+            await conn.execute(text(f"ALTER TABLE dropped_domains ADD COLUMN deletion_date {dt_type}"))
+        # Query patterns:
+        # - by time window (dropped_date) + optional tld + keyword
+        # - status updates (availability_status + last_status_check)
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_dropped_domains_tld ON dropped_domains(tld)"))
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_dropped_domains_dropped_date ON dropped_domains(dropped_date)"))
+        await conn.execute(
+            text(
+                "CREATE INDEX IF NOT EXISTS ix_dropped_domains_tld_dropped_date "
+                "ON dropped_domains(tld, dropped_date)"
+            )
+        )
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_dropped_domains_domain ON dropped_domains(domain)"))
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_dropped_domains_availability ON dropped_domains(availability_status)"))
+        await conn.execute(text("CREATE INDEX IF NOT EXISTS ix_dropped_domains_last_status_check ON dropped_domains(last_status_check)"))
+        # Enforce de-duplication per drop day (safe + idempotent).
+        # SQLite: unique indexes are supported.
+        # Postgres: unique indexes are supported; we avoid CONCURRENTLY here (runs in startup transaction).
+        await conn.execute(
+            text(
+                "CREATE UNIQUE INDEX IF NOT EXISTS ux_dropped_domains_domain_tld_dropped_date "
+                "ON dropped_domains(domain, tld, dropped_date)"
+            )
+        )
+
     # ---------------------------------------------------------
     # 3) tld_prices composite index for trend computations
     # ---------------------------------------------------------

View File
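The whole migration relies on `CREATE INDEX IF NOT EXISTS` being idempotent, so the startup hook can run on every deploy without tracking a schema version. A self-contained SQLite demonstration of the pattern (table and index names mirror the migration, but this is a sketch, not the app's code):

```python
import sqlite3

def ensure_indexes(conn: sqlite3.Connection) -> None:
    # Idempotent DDL: safe to run on every application startup.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS dropped_domains "
        "(domain TEXT, tld TEXT, dropped_date TEXT)"
    )
    conn.execute("CREATE INDEX IF NOT EXISTS ix_dropped_domains_tld ON dropped_domains(tld)")
    # Unique index doubles as de-duplication per drop day.
    conn.execute(
        "CREATE UNIQUE INDEX IF NOT EXISTS ux_dropped_domains_domain_tld_dropped_date "
        "ON dropped_domains(domain, tld, dropped_date)"
    )

conn = sqlite3.connect(":memory:")
ensure_indexes(conn)
ensure_indexes(conn)  # second run is a no-op, not an error
```

The unique index is what makes repeated zone syncs safe: re-inserting the same `(domain, tld, dropped_date)` triple raises an integrity error instead of silently duplicating rows.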

@@ -19,6 +19,7 @@ from app.config import get_settings
 from app.database import init_db
 from app.scheduler import start_scheduler, stop_scheduler
 from app.observability.metrics import instrument_app
+from app.services.http_client_pool import close_rdap_http_client

 # Configure logging
 logging.basicConfig(

@@ -59,6 +60,7 @@ async def lifespan(app: FastAPI):
     # Shutdown
     if settings.enable_scheduler:
         stop_scheduler()
+    await close_rdap_http_client()
     logger.info("Application shutdown complete")

View File

@@ -11,6 +11,7 @@ class DomainStatus(str, Enum):
     """Domain availability status."""
     AVAILABLE = "available"
     TAKEN = "taken"
+    DROPPING_SOON = "dropping_soon"  # In transition/pending delete
     ERROR = "error"
     UNKNOWN = "unknown"

@@ -32,6 +33,7 @@ class Domain(Base):
     # WHOIS data (optional)
     registrar: Mapped[str | None] = mapped_column(String(255), nullable=True)
     expiration_date: Mapped[datetime | None] = mapped_column(DateTime, nullable=True)
+    deletion_date: Mapped[datetime | None] = mapped_column(DateTime, nullable=True)  # When domain will be fully deleted

     # User relationship
     user_id: Mapped[int] = mapped_column(ForeignKey("users.id"), nullable=False)

@@ -40,6 +42,8 @@ class Domain(Base):
     # Timestamps
     created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow)
     last_checked: Mapped[datetime | None] = mapped_column(DateTime, nullable=True)
+    # How the current status was derived (rdap_iana, whois, dns, etc.)
+    last_check_method: Mapped[str | None] = mapped_column(String(30), nullable=True)

     # Check history relationship
     checks: Mapped[list["DomainCheck"]] = relationship(

@@ -52,6 +56,17 @@ class Domain(Base):
     def __repr__(self) -> str:
         return f"<Domain {self.name} ({self.status})>"

+    # ------------------------------------------------------------------
+    # Canonical status fields (API stability for Terminal consistency)
+    # ------------------------------------------------------------------
+    @property
+    def status_checked_at(self) -> datetime | None:
+        return self.last_checked
+
+    @property
+    def status_source(self) -> str | None:
+        return self.last_check_method
+

 class DomainCheck(Base):
     """History of domain availability checks."""

View File

@@ -64,7 +64,10 @@ class User(Base):
         "PortfolioDomain", back_populates="user", cascade="all, delete-orphan"
     )
     price_alerts: Mapped[List["PriceAlert"]] = relationship(
-        "PriceAlert", cascade="all, delete-orphan", passive_deletes=True
+        # NOTE:
+        # We do not rely on DB-level ON DELETE CASCADE for this FK (it is not declared in the model),
+        # so we must not set passive_deletes=True. Otherwise deleting a user can fail with FK violations.
+        "PriceAlert", cascade="all, delete-orphan"
     )

     # For Sale Marketplace
     listings: Mapped[List["DomainListing"]] = relationship(

View File

@@ -43,6 +43,7 @@ class DroppedDomain(Base):
     rdap_status = Column(String(255), nullable=True)  # Raw RDAP status string
     last_status_check = Column(DateTime, nullable=True)
     deletion_date = Column(DateTime, nullable=True)  # When domain will be fully deleted
+    last_check_method = Column(String(30), nullable=True)  # rdap_iana, rdap_ch, error, etc.

     __table_args__ = (
         Index('ix_dropped_domains_tld_date', 'tld', 'dropped_date'),

View File

@@ -88,7 +88,6 @@ async def check_domains_by_frequency(frequency: str):
                 tiers_for_frequency.append(tier)

         # Get domains from users with matching subscription tier
-        from sqlalchemy.orm import joinedload
         result = await db.execute(
             select(Domain)
             .join(User, Domain.user_id == User.id)

@@ -99,56 +98,80 @@ async def check_domains_by_frequency(frequency: str):
             )
         )
         domains = result.scalars().all()

         logger.info(f"Checking {len(domains)} domains...")

         checked = 0
         errors = 0
         newly_available = []
         newly_taken = []  # Track domains that became taken
         status_changes = []  # All status changes for logging

-        for domain in domains:
-            try:
-                # Check domain availability
-                check_result = await domain_checker.check_domain(domain.name)
+        # Concurrency control + polite pacing (prevents RDAP/WHOIS bans)
+        max_concurrent = max(1, int(getattr(settings, "domain_check_max_concurrent", 3) or 3))
+        delay = float(getattr(settings, "domain_check_delay_seconds", 0.3) or 0.3)
+        semaphore = asyncio.Semaphore(max_concurrent)
+
+        async def _check_one(d: Domain) -> tuple[Domain, object | None, Exception | None]:
+            async with semaphore:
+                try:
+                    res = await domain_checker.check_domain(d.name)
+                    # small delay after each external request
+                    await asyncio.sleep(delay)
+                    return d, res, None
+                except Exception as e:
+                    return d, None, e
+
+        # Process in chunks to avoid huge gather lists
+        chunk_size = 200
+        for i in range(0, len(domains), chunk_size):
+            chunk = domains[i : i + chunk_size]
+            results = await asyncio.gather(*[_check_one(d) for d in chunk])
+
+            for domain, check_result, err in results:
+                if err is not None or check_result is None:
+                    logger.error(f"Error checking domain {domain.name}: {err}")
+                    errors += 1
+                    continue

                 # Track status transitions
                 was_available = domain.is_available
                 is_now_available = check_result.is_available

                 # Detect transition: taken -> available (domain dropped!)
                 if not was_available and is_now_available:
-                    status_changes.append({
-                        'domain': domain.name,
-                        'change': 'became_available',
-                        'old_registrar': domain.registrar,
-                    })
+                    status_changes.append(
+                        {
+                            "domain": domain.name,
+                            "change": "became_available",
+                            "old_registrar": domain.registrar,
+                        }
+                    )
                     if domain.notify_on_available:
                         newly_available.append(domain)
                     logger.info(f"🎯 Domain AVAILABLE: {domain.name} (was registered by {domain.registrar})")

                 # Detect transition: available -> taken (someone registered it!)
                 elif was_available and not is_now_available:
-                    status_changes.append({
-                        'domain': domain.name,
-                        'change': 'became_taken',
-                        'new_registrar': check_result.registrar,
-                    })
+                    status_changes.append(
+                        {
+                            "domain": domain.name,
+                            "change": "became_taken",
+                            "new_registrar": check_result.registrar,
+                        }
+                    )
                     if domain.notify_on_available:  # Notify if alerts are on
-                        newly_taken.append({
-                            'domain': domain,
-                            'registrar': check_result.registrar,
-                        })
+                        newly_taken.append({"domain": domain, "registrar": check_result.registrar})
                     logger.info(f"⚠️ Domain TAKEN: {domain.name} (now registered by {check_result.registrar})")

                 # Update domain with fresh data
                 domain.status = check_result.status
                 domain.is_available = check_result.is_available
                 domain.registrar = check_result.registrar
                 domain.expiration_date = check_result.expiration_date
                 domain.last_checked = datetime.utcnow()
+                domain.last_check_method = getattr(check_result, "check_method", None)

                 # Create check record for history
                 check = DomainCheck(
                     domain_id=domain.id,

@@ -158,15 +181,7 @@ async def check_domains_by_frequency(frequency: str):
                     checked_at=datetime.utcnow(),
                 )
                 db.add(check)
                 checked += 1

-                # Small delay to avoid rate limiting
-                await asyncio.sleep(0.5)
-
-            except Exception as e:
-                logger.error(f"Error checking domain {domain.name}: {e}")
-                errors += 1

         await db.commit()
@@ -377,30 +392,51 @@ async def run_health_checks():
     try:
         async with AsyncSessionLocal() as db:
             # Get all watched domains (registered, not available)
-            result = await db.execute(
-                select(Domain).where(Domain.is_available == False)
-            )
+            result = await db.execute(select(Domain).where(Domain.is_available == False))
             domains = result.scalars().all()

             logger.info(f"Running health checks on {len(domains)} domains...")

+            if not domains:
+                return
+
+            # Prefetch caches to avoid N+1 queries
+            domain_ids = [d.id for d in domains]
+            caches_result = await db.execute(
+                select(DomainHealthCache).where(DomainHealthCache.domain_id.in_(domain_ids))
+            )
+            caches = caches_result.scalars().all()
+            cache_by_domain_id = {c.domain_id: c for c in caches}
+
             health_checker = get_health_checker()
             checked = 0
             errors = 0
             status_changes = []

-            for domain in domains:
-                try:
-                    # Run health check
-                    report = await health_checker.check_domain(domain.name)
-
-                    # Check for status changes (if we have previous data)
-                    # Get existing cache
-                    cache_result = await db.execute(
-                        select(DomainHealthCache).where(DomainHealthCache.domain_id == domain.id)
-                    )
-                    existing_cache = cache_result.scalar_one_or_none()
+            max_concurrent = max(1, int(getattr(settings, "domain_check_max_concurrent", 3) or 3))
+            delay = float(getattr(settings, "domain_check_delay_seconds", 0.3) or 0.3)
+            semaphore = asyncio.Semaphore(max_concurrent)
+
+            async def _check_one(d: Domain):
+                async with semaphore:
+                    report = await health_checker.check_domain(d.name)
+                    await asyncio.sleep(delay)
+                    return d, report
+
+            chunk_size = 100
+            for i in range(0, len(domains), chunk_size):
+                chunk = domains[i : i + chunk_size]
+                results = await asyncio.gather(*[_check_one(d) for d in chunk], return_exceptions=True)
+
+                for item in results:
+                    if isinstance(item, Exception):
+                        errors += 1
+                        continue
+                    domain, report = item
+                    existing_cache = cache_by_domain_id.get(domain.id)
                     old_status = existing_cache.status if existing_cache else None
                     new_status = report.status.value

@@ -432,7 +468,6 @@ async def run_health_checks():
                         existing_cache.ssl_data = ssl_json
                         existing_cache.checked_at = datetime.utcnow()
                     else:
-                        # Create new cache entry
                         new_cache = DomainHealthCache(
                             domain_id=domain.id,
                             status=new_status,

@@ -444,15 +479,9 @@ async def run_health_checks():
                             checked_at=datetime.utcnow(),
                         )
                         db.add(new_cache)
+                        cache_by_domain_id[domain.id] = new_cache

                     checked += 1

-                    # Small delay to avoid overwhelming DNS servers
-                    await asyncio.sleep(0.3)
-
-                except Exception as e:
-                    logger.error(f"Health check failed for {domain.name}: {e}")
-                    errors += 1

             await db.commit()
@@ -726,14 +755,16 @@ def setup_scheduler():
         replace_existing=True,
     )

-    # Drops availability verification (every 10 minutes - remove taken domains)
-    scheduler.add_job(
-        verify_drops,
-        CronTrigger(minute='*/10'),  # Every 10 minutes
-        id="drops_verification",
-        name="Drops Availability Check (10-min)",
-        replace_existing=True,
-    )
+    # Drops availability verification - DISABLED to prevent RDAP bans
+    # The domains from zone files are already verified as "dropped" by the zone diff
+    # We don't need to double-check via RDAP - this causes rate limiting!
+    # scheduler.add_job(
+    #     verify_drops,
+    #     CronTrigger(hour=12, minute=0),  # Once a day at noon if needed
+    #     id="drops_verification",
+    #     name="Drops Availability Check (daily)",
+    #     replace_existing=True,
+    # )

     logger.info(
         f"Scheduler configured:"

@@ -743,10 +774,11 @@ def setup_scheduler():
         f"\n - TLD price scrape 2x daily at 03:00 & 15:00 UTC"
         f"\n - Price change alerts at 04:00 & 16:00 UTC"
         f"\n - Auction scrape every 2 hours at :30"
-        f"\n - Expired auction cleanup every 15 minutes"
+        f"\n - Expired auction cleanup every 5 minutes"
         f"\n - Sniper alert matching every 30 minutes"
-        f"\n - Zone file sync daily at 05:00 UTC"
-        f"\n - Drops availability check every 10 minutes"
+        f"\n - Switch.ch zone sync daily at 05:00 UTC (.ch, .li)"
+        f"\n - ICANN CZDS zone sync daily at 06:00 UTC (gTLDs)"
+        f"\n - Zone cleanup hourly at :45"
     )
@@ -1034,8 +1066,11 @@ async def verify_drops():

 async def sync_zone_files():
-    """Sync zone files from Switch.ch (.ch, .li) and ICANN CZDS (gTLDs)."""
-    logger.info("Starting zone file sync...")
+    """Sync zone files from Switch.ch (.ch, .li)."""
+    logger.info("Starting Switch.ch zone file sync...")
+
+    results = {"ch": None, "li": None}
+    errors = []

     try:
         from app.services.zone_file import ZoneFileService

@@ -1047,14 +1082,41 @@ async def sync_zone_files():
         for tld in ["ch", "li"]:
             try:
                 result = await service.run_daily_sync(db, tld)
-                logger.info(f".{tld} zone sync: {len(result.get('dropped', []))} dropped, {result.get('new_count', 0)} new")
+                dropped_count = len(result.get('dropped', []))
+                results[tld] = {"status": "success", "dropped": dropped_count, "new": result.get('new_count', 0)}
+                logger.info(f".{tld} zone sync: {dropped_count} dropped, {result.get('new_count', 0)} new")
             except Exception as e:
                 logger.error(f".{tld} zone sync failed: {e}")
+                results[tld] = {"status": "error", "error": str(e)}
+                errors.append(f".{tld}: {e}")

         logger.info("Switch.ch zone file sync completed")

+        # Send alert if any zones failed
+        if errors:
+            from app.services.email_service import email_service
+            await email_service.send_ops_alert(
+                alert_type="Zone Sync",
+                title=f"Switch.ch Sync: {len(errors)} zone(s) failed",
+                details=f"Results:\n" + "\n".join([
+                    f"- .{tld}: {r.get('status')} ({r.get('dropped', 0)} dropped)" if r else f"- .{tld}: not processed"
+                    for tld, r in results.items()
+                ]) + f"\n\nErrors:\n" + "\n".join(errors),
+                severity="error",
+            )
+
     except Exception as e:
         logger.exception(f"Zone file sync failed: {e}")
+        try:
+            from app.services.email_service import email_service
+            await email_service.send_ops_alert(
+                alert_type="Zone Sync",
+                title="Switch.ch Sync CRASHED",
+                details=f"The Switch.ch sync job crashed:\n\n{str(e)}",
+                severity="critical",
+            )
+        except:
+            pass
 async def sync_czds_zones():

@@ -1075,15 +1137,43 @@ async def sync_czds_zones():
         client = CZDSClient()

         async with AsyncSessionLocal() as db:
-            results = await client.sync_all_zones(db, APPROVED_TLDS)
+            results = await client.sync_all_zones(db, APPROVED_TLDS, parallel=True)

             success_count = sum(1 for r in results if r["status"] == "success")
+            error_count = sum(1 for r in results if r["status"] == "error")
             total_dropped = sum(r["dropped_count"] for r in results)

             logger.info(f"CZDS sync complete: {success_count}/{len(APPROVED_TLDS)} zones, {total_dropped:,} dropped")

+            # Send alert if any zones failed
+            if error_count > 0:
+                from app.services.email_service import email_service
+                error_details = "\n".join([
+                    f"- .{r['tld']}: {r.get('error', 'Unknown error')}"
+                    for r in results if r["status"] == "error"
+                ])
+                await email_service.send_ops_alert(
+                    alert_type="Zone Sync",
+                    title=f"CZDS Sync: {error_count} zone(s) failed",
+                    details=f"Successful: {success_count}/{len(APPROVED_TLDS)}\n"
+                            f"Dropped domains: {total_dropped:,}\n\n"
+                            f"Failed zones:\n{error_details}",
+                    severity="error" if error_count > 2 else "warning",
+                )
+
     except Exception as e:
         logger.exception(f"CZDS zone file sync failed: {e}")
+        # Send critical alert for complete failure
+        try:
+            from app.services.email_service import email_service
+            await email_service.send_ops_alert(
+                alert_type="Zone Sync",
+                title="CZDS Sync CRASHED",
+                details=f"The entire CZDS sync job crashed:\n\n{str(e)}",
+                severity="critical",
+            )
+        except:
+            pass  # Don't fail the error handler


 async def match_sniper_alerts():

View File
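Both reworked scheduler jobs use the same pattern: a semaphore caps concurrent external calls while `asyncio.gather` runs a chunk of tasks at once, replacing the old strictly sequential loop with its fixed per-item sleep. The core of that pattern, reduced to a runnable stdlib sketch (`check` is a placeholder for the RDAP/WHOIS call):

```python
import asyncio

async def check(name: str) -> str:
    # Placeholder for an external RDAP/WHOIS lookup.
    await asyncio.sleep(0)
    return f"checked:{name}"

async def check_all(
    names: list[str],
    max_concurrent: int = 3,
    chunk_size: int = 200,
) -> list[str]:
    semaphore = asyncio.Semaphore(max_concurrent)

    async def _one(name: str) -> str:
        async with semaphore:  # at most max_concurrent lookups in flight
            return await check(name)

    results: list[str] = []
    # Chunking bounds the size of each gather's task list (and peak memory).
    for i in range(0, len(names), chunk_size):
        chunk = names[i : i + chunk_size]
        results.extend(await asyncio.gather(*[_one(n) for n in chunk]))
    return results
```

`gather` preserves input order, so results can be zipped back to their domains; the semaphore (plus the small post-request delay in the real code) is what keeps registries from rate-limiting or banning the checker.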

@@ -39,9 +39,13 @@ class DomainResponse(BaseModel):
     is_available: bool
     registrar: Optional[str]
     expiration_date: Optional[datetime]
+    deletion_date: Optional[datetime] = None
     notify_on_available: bool
     created_at: datetime
     last_checked: Optional[datetime]
+    # Canonical status metadata (stable across Terminal modules)
+    status_checked_at: Optional[datetime] = None
+    status_source: Optional[str] = None

     class Config:
         from_attributes = True

@@ -70,13 +74,14 @@ class DomainCheckRequest(BaseModel):

 class DomainCheckResponse(BaseModel):
     """Schema for domain check response."""
     domain: str
-    status: str
+    status: DomainStatus
     is_available: bool
     registrar: Optional[str] = None
     expiration_date: Optional[datetime] = None
     creation_date: Optional[datetime] = None
     name_servers: Optional[List[str]] = None
     error_message: Optional[str] = None
+    status_source: Optional[str] = None
     checked_at: datetime

View File

@ -227,11 +227,43 @@ class CZDSClient:
return None return None
async def save_domains(self, tld: str, domains: set[str]): async def save_domains(self, tld: str, domains: set[str]):
"""Save current domains to cache file.""" """Save current domains to cache file with date-based retention."""
from app.config import get_settings
settings = get_settings()
# Save current file (for next sync comparison)
cache_file = self.data_dir / f"{tld}_domains.txt" cache_file = self.data_dir / f"{tld}_domains.txt"
cache_file.write_text("\n".join(sorted(domains))) cache_file.write_text("\n".join(sorted(domains)))
# Also save dated snapshot for retention
today = datetime.now().strftime("%Y-%m-%d")
dated_file = self.data_dir / f"{tld}_domains_{today}.txt"
if not dated_file.exists():
dated_file.write_text("\n".join(sorted(domains)))
logger.info(f"Saved snapshot: {dated_file.name}")
# Cleanup old snapshots (keep last N days)
retention_days = getattr(settings, 'zone_retention_days', 3)
await self._cleanup_old_snapshots(tld, retention_days)
logger.info(f"Saved {len(domains):,} domains for .{tld}") logger.info(f"Saved {len(domains):,} domains for .{tld}")
async def _cleanup_old_snapshots(self, tld: str, keep_days: int = 3):
"""Remove zone file snapshots older than keep_days."""
import re
from datetime import timedelta
cutoff = datetime.now() - timedelta(days=keep_days)
pattern = re.compile(rf"^{tld}_domains_(\d{{4}}-\d{{2}}-\d{{2}})\.txt$")
for file in self.data_dir.glob(f"{tld}_domains_*.txt"):
match = pattern.match(file.name)
if match:
file_date = datetime.strptime(match.group(1), "%Y-%m-%d")
if file_date < cutoff:
file.unlink()
logger.info(f"Deleted old snapshot: {file.name}")
async def process_drops( async def process_drops(
self, self,
db: AsyncSession, db: AsyncSession,
@ -240,87 +272,66 @@ class CZDSClient:
        current: set[str]
    ) -> list[dict]:
        """
        Find dropped domains and store them directly.

        NOTE: We do NOT verify availability here to avoid RDAP rate limits/bans.
        Verification happens separately in the 'verify_drops' scheduler job
        which runs in small batches throughout the day.
        """
        dropped = previous - current

        if not dropped:
            logger.info(f"No dropped domains found for .{tld}")
            return []

        logger.info(f"Found {len(dropped):,} dropped domains for .{tld}, saving to database...")

        today = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)

        # Store all drops - availability will be verified separately
        dropped_records = []
        batch_size = 1000
        dropped_list = list(dropped)

        for i in range(0, len(dropped_list), batch_size):
            batch = dropped_list[i:i + batch_size]
            for name in batch:
                try:
                    record = DroppedDomain(
                        domain=name,  # Just the name, not the full domain!
                        tld=tld,
                        dropped_date=today,
                        length=len(name),
                        is_numeric=name.isdigit(),
                        has_hyphen='-' in name,
                        availability_status='unknown'  # Will be verified later
                    )
                    db.add(record)
                    dropped_records.append({
                        "domain": f"{name}.{tld}",
                        "length": len(name),
                    })
                except Exception:
                    # Duplicate or other error - skip
                    pass

            # Commit batch
            try:
                await db.commit()
            except Exception:
                await db.rollback()

            if (i + batch_size) % 5000 == 0:
                logger.info(f"Saved {min(i + batch_size, len(dropped_list)):,}/{len(dropped_list):,} drops")

        # Final commit
        try:
            await db.commit()
        except Exception:
            await db.rollback()

        logger.info(f"CZDS drops for .{tld}: {len(dropped_records):,} saved (verification pending)")

        return dropped_records
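The drop detection and batching above boil down to a set difference sliced into fixed-size chunks for bulk insert. A hypothetical standalone sketch of just that slicing (function name is illustrative):

```python
def batch_drops(previous: set[str], current: set[str], batch_size: int = 1000) -> list[list[str]]:
    """Split the dropped-domain set into fixed-size insert batches (illustrative)."""
    dropped = sorted(previous - current)  # sorted for deterministic batch contents
    return [dropped[i:i + batch_size] for i in range(0, len(dropped), batch_size)]

prev = {f"name{i}" for i in range(2500)} | {"keepme"}
curr = {"keepme"}
batches = batch_drops(prev, curr, batch_size=1000)
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Committing once per batch, as the code above does, keeps transactions small so one bad row only rolls back its own batch.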
@@ -371,7 +382,9 @@ class CZDSClient:
        result["current_count"] = len(current_domains)

        # Clean up zone file (can be very large)
        # Note: Parser may have already deleted the file during cleanup_ram_drive()
        if zone_path.exists():
            zone_path.unlink()

        # Get previous snapshot
        previous_domains = await self.get_previous_domains(tld)
@@ -416,7 +429,9 @@ class CZDSClient:
    async def sync_all_zones(
        self,
        db: AsyncSession,
        tlds: Optional[list[str]] = None,
        parallel: bool = True,
        max_concurrent: int = 3
    ) -> list[dict]:
        """
        Sync all approved zone files.
@@ -424,26 +439,32 @@ class CZDSClient:
        Args:
            db: Database session
            tlds: Optional list of TLDs to sync. Defaults to APPROVED_TLDS.
            parallel: If True, download zones in parallel (faster)
            max_concurrent: Max concurrent downloads (to be nice to ICANN)

        Returns:
            List of sync results for each TLD.
        """
        target_tlds = tlds or APPROVED_TLDS
        start_time = datetime.utcnow()

        # Get available zones with their download URLs
        available_zones = await self.get_available_zones()

        logger.info(f"Starting CZDS sync for {len(target_tlds)} zones: {target_tlds}")
        logger.info(f"Available zones: {list(available_zones.keys())}")
        logger.info(f"Mode: {'PARALLEL' if parallel else 'SEQUENTIAL'} (max {max_concurrent} concurrent)")

        # Prepare tasks with their download URLs
        tasks_to_run = []
        unavailable_results = []

        for tld in target_tlds:
            download_url = available_zones.get(tld)
            if not download_url:
                logger.warning(f"No download URL available for .{tld}")
                unavailable_results.append({
                    "tld": tld,
                    "status": "not_available",
                    "current_count": 0,
@@ -452,20 +473,55 @@ class CZDSClient:
                    "new_count": 0,
                    "error": f"No access to .{tld} zone"
                })
            else:
                tasks_to_run.append((tld, download_url))

        results = unavailable_results.copy()

        if parallel and len(tasks_to_run) > 1:
            # Parallel execution with semaphore for rate limiting
            semaphore = asyncio.Semaphore(max_concurrent)

            async def sync_with_semaphore(tld: str, url: str) -> dict:
                async with semaphore:
                    return await self.sync_zone(db, tld, url)

            # Run all tasks in parallel
            parallel_results = await asyncio.gather(
                *[sync_with_semaphore(tld, url) for tld, url in tasks_to_run],
                return_exceptions=True
            )

            # Process results
            for i, result in enumerate(parallel_results):
                tld = tasks_to_run[i][0]
                if isinstance(result, Exception):
                    logger.error(f"Parallel sync failed for .{tld}: {result}")
                    results.append({
                        "tld": tld,
                        "status": "error",
                        "current_count": 0,
                        "previous_count": 0,
                        "dropped_count": 0,
                        "new_count": 0,
                        "error": str(result)
                    })
                else:
                    results.append(result)
        else:
            # Sequential execution (fallback)
            for tld, download_url in tasks_to_run:
                result = await self.sync_zone(db, tld, download_url)
                results.append(result)
                await asyncio.sleep(2)

        # Summary
        elapsed = (datetime.utcnow() - start_time).total_seconds()
        success_count = sum(1 for r in results if r["status"] == "success")
        total_dropped = sum(r["dropped_count"] for r in results)
        logger.info(
            f"CZDS sync complete in {elapsed:.1f}s: "
            f"{success_count}/{len(target_tlds)} zones successful, "
            f"{total_dropped:,} total dropped domains"
        )
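The semaphore-bounded `asyncio.gather` pattern used above can be demonstrated in isolation. A sketch with a fake sync task (all names here are illustrative); the `peak` counter just proves the concurrency cap holds:

```python
import asyncio

async def bounded_gather(factories, max_concurrent: int = 3):
    """Run coroutine factories with at most max_concurrent in flight (sketch)."""
    semaphore = asyncio.Semaphore(max_concurrent)
    running = 0
    peak = 0

    async def run(factory):
        nonlocal running, peak
        async with semaphore:
            running += 1
            peak = max(peak, running)
            try:
                return await factory()
            finally:
                running -= 1

    # return_exceptions=True so one failed zone doesn't cancel the rest
    results = await asyncio.gather(*(run(f) for f in factories), return_exceptions=True)
    return results, peak

async def fake_sync(tld: str) -> dict:
    await asyncio.sleep(0.01)  # stand-in for a zone download
    return {"tld": tld, "status": "success"}

tlds = ["ch", "li", "de", "io", "dev"]
results, peak = asyncio.run(bounded_gather([lambda t=t: fake_sync(t) for t in tlds]))
print(len(results), peak <= 3)  # 5 True
```

`return_exceptions=True` is what lets the caller convert per-zone exceptions into error result dicts instead of aborting the whole sync.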


@@ -22,6 +22,7 @@ import whodap
import httpx

from app.models.domain import DomainStatus
from app.services.http_client_pool import get_rdap_http_client

logger = logging.getLogger(__name__)
@@ -73,16 +74,17 @@ class DomainChecker:
        'de', 'uk', 'fr', 'nl', 'eu', 'be', 'at', 'us',
    }

    # TLDs with preferred direct RDAP endpoints (faster than IANA bootstrap)
    CUSTOM_RDAP_ENDPOINTS = {
        'ch': 'https://rdap.nic.ch/domain/',    # Swiss .ch domains (SWITCH)
        'li': 'https://rdap.nic.ch/domain/',    # Liechtenstein .li (same registry)
        'de': 'https://rdap.denic.de/domain/',  # German .de domains (DENIC)
    }

    # IANA Bootstrap - works for ALL TLDs (redirects to correct registry)
    IANA_BOOTSTRAP_URL = 'https://rdap.org/domain/'

    # TLDs that only support WHOIS (no RDAP at all - very rare)
    WHOIS_ONLY_TLDS = {
        'ru', 'su', 'ua', 'by', 'kz',
    }
@@ -163,129 +165,116 @@ class DomainChecker:
        url = f"{endpoint}{domain}"

        try:
            client = await get_rdap_http_client()
            response = await client.get(url, timeout=10.0)

            if response.status_code == 404:
                # Domain not found = available
                return DomainCheckResult(
                    domain=domain,
                    status=DomainStatus.AVAILABLE,
                    is_available=True,
                    check_method="rdap_custom",
                )

            if response.status_code == 200:
                # Domain exists in registry - but check status for pending delete
                data = response.json()

                # Check if domain is pending deletion (dropped but not yet purged)
                domain_status = data.get("status", [])
                pending_delete_statuses = [
                    "pending delete",
                    "pendingdelete",
                    "redemption period",
                    "redemptionperiod",
                    "pending purge",
                    "pendingpurge",
                ]
                is_pending_delete = any(
                    any(pds in str(s).lower() for pds in pending_delete_statuses)
                    for s in domain_status
                )

                if is_pending_delete:
                    logger.info(
                        f"{domain} is in transition/pending delete (status: {domain_status})"
                    )
                    return DomainCheckResult(
                        domain=domain,
                        status=DomainStatus.DROPPING_SOON,  # In transition, not yet available
                        is_available=False,  # Not yet registrable
                        check_method="rdap_custom",
                        raw_data={"rdap_status": domain_status, "note": "pending_delete"},
                    )

                # Extract dates from events
                expiration_date = None
                creation_date = None
                updated_date = None
                registrar = None
                name_servers: list[str] = []

                # Parse events
                events = data.get("events", [])
                for event in events:
                    action = event.get("eventAction", "").lower()
                    date_str = event.get("eventDate", "")

                    if not expiration_date and any(x in action for x in ["expiration", "expire"]):
                        expiration_date = self._parse_datetime(date_str)
                    if not creation_date and any(x in action for x in ["registration", "created"]):
                        creation_date = self._parse_datetime(date_str)
                    if any(x in action for x in ["changed", "update", "last changed"]):
                        updated_date = self._parse_datetime(date_str)

                # Parse nameservers
                for ns in data.get("nameservers", []):
                    if isinstance(ns, dict):
                        ns_name = ns.get("ldhName", "")
                        if ns_name:
                            name_servers.append(ns_name.lower())

                # Parse registrar from entities
                for entity in data.get("entities", []):
                    roles = entity.get("roles", [])
                    if any(r in roles for r in ["registrar", "technical"]):
                        vcard = entity.get("vcardArray", [])
                        if isinstance(vcard, list) and len(vcard) > 1:
                            for item in vcard[1]:
                                if isinstance(item, list) and len(item) > 3:
                                    if item[0] in ("fn", "org") and item[3]:
                                        registrar = str(item[3])
                                        break
                        if not registrar:
                            handle = entity.get("handle", "")
                            if handle:
                                registrar = handle
                    if registrar:
                        break

                # For .de domains: DENIC doesn't expose expiration via RDAP
                if tld == "de" and not expiration_date:
                    logger.debug(f"No expiration in RDAP for {domain}, will try WHOIS")

                return DomainCheckResult(
                    domain=domain,
                    status=DomainStatus.TAKEN,
                    is_available=False,
                    registrar=registrar,
                    expiration_date=expiration_date,
                    creation_date=creation_date,
                    updated_date=updated_date,
                    name_servers=name_servers if name_servers else None,
                    check_method="rdap_custom",
                )

            # Other status codes - try fallback
            logger.warning(f"Custom RDAP returned {response.status_code} for {domain}")
            return None
        except httpx.TimeoutException:
            logger.warning(f"Custom RDAP timeout for {domain}")
@@ -294,9 +283,101 @@ class DomainChecker:
            logger.warning(f"Custom RDAP error for {domain}: {e}")
            return None
    async def _check_rdap_iana(self, domain: str) -> Optional[DomainCheckResult]:
        """
        Check domain using IANA Bootstrap RDAP service.

        This is the most reliable method as rdap.org automatically
        redirects to the correct registry for any TLD.
        """
        url = f"{self.IANA_BOOTSTRAP_URL}{domain}"

        try:
            client = await get_rdap_http_client()
            response = await client.get(url, timeout=15.0)

            if response.status_code == 404:
                return DomainCheckResult(
                    domain=domain,
                    status=DomainStatus.AVAILABLE,
                    is_available=True,
                    check_method="rdap_iana",
                )

            if response.status_code == 429:
                logger.warning(f"RDAP rate limited for {domain}")
                return None

            if response.status_code != 200:
                return None

            data = response.json()

            # Parse events for dates
            expiration_date = None
            creation_date = None
            registrar = None

            for event in data.get('events', []):
                action = event.get('eventAction', '').lower()
                date_str = event.get('eventDate', '')
                if 'expiration' in action and date_str:
                    expiration_date = self._parse_datetime(date_str)
                elif 'registration' in action and date_str:
                    creation_date = self._parse_datetime(date_str)

            # Extract registrar
            for entity in data.get('entities', []):
                roles = entity.get('roles', [])
                if 'registrar' in roles:
                    vcard = entity.get('vcardArray', [])
                    if isinstance(vcard, list) and len(vcard) > 1:
                        for item in vcard[1]:
                            if isinstance(item, list) and len(item) > 3:
                                if item[0] == 'fn' and item[3]:
                                    registrar = str(item[3])
                                    break

            # Check status for pending delete
            status_list = data.get('status', [])
            status_str = ' '.join(str(s).lower() for s in status_list)
            is_dropping = any(x in status_str for x in [
                'pending delete', 'pendingdelete',
                'redemption period', 'redemptionperiod',
            ])

            if is_dropping:
                return DomainCheckResult(
                    domain=domain,
                    status=DomainStatus.DROPPING_SOON,
                    is_available=False,
                    registrar=registrar,
                    expiration_date=expiration_date,
                    creation_date=creation_date,
                    check_method="rdap_iana",
                )

            return DomainCheckResult(
                domain=domain,
                status=DomainStatus.TAKEN,
                is_available=False,
                registrar=registrar,
                expiration_date=expiration_date,
                creation_date=creation_date,
                check_method="rdap_iana",
            )

        except httpx.TimeoutException:
            logger.debug(f"IANA RDAP timeout for {domain}")
            return None
        except Exception as e:
            logger.debug(f"IANA RDAP error for {domain}: {e}")
            return None
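The event loop in `_check_rdap_iana` reduces to mapping RDAP `events` entries onto a pair of dates. A pure-function sketch of that parsing, testable without any network access (the helper name is illustrative):

```python
from datetime import datetime
from typing import Optional

def parse_rdap_events(events: list) -> dict:
    """Pull creation/expiration dates out of an RDAP 'events' array (sketch)."""
    out: dict[str, Optional[datetime]] = {"creation": None, "expiration": None}
    for event in events:
        action = event.get("eventAction", "").lower()
        date_str = event.get("eventDate", "")
        if not date_str:
            continue
        # RDAP dates are ISO 8601, typically with a 'Z' UTC suffix
        when = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
        if "expiration" in action and out["expiration"] is None:
            out["expiration"] = when
        elif "registration" in action and out["creation"] is None:
            out["creation"] = when
    return out

events = [
    {"eventAction": "registration", "eventDate": "2020-01-15T10:00:00Z"},
    {"eventAction": "expiration", "eventDate": "2026-01-15T10:00:00Z"},
]
parsed = parse_rdap_events(events)
print(parsed["creation"].year, parsed["expiration"].year)  # 2020 2026
```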
    async def _check_rdap(self, domain: str) -> Optional[DomainCheckResult]:
        """
        Check domain using RDAP (Registration Data Access Protocol) via whodap library.
        Returns None if RDAP is not available for this TLD.
        """
@@ -319,7 +400,6 @@ class DomainChecker:
            if response.events:
                for event in response.events:
                    event_dict = event.__dict__ if hasattr(event, '__dict__') else {}
                    action = event_dict.get('eventAction', '')
                    date_str = event_dict.get('eventDate', '')
@@ -366,12 +446,10 @@ class DomainChecker:
            )

        except NotImplementedError:
            logger.debug(f"No RDAP server for TLD .{tld}")
            return None

        except Exception as e:
            error_msg = str(e).lower()
            if 'not found' in error_msg or '404' in error_msg:
                return DomainCheckResult(
                    domain=domain,
@@ -379,7 +457,7 @@ class DomainChecker:
                    is_available=True,
                    check_method="rdap",
                )

            logger.debug(f"RDAP check failed for {domain}: {e}")
            return None
    async def _check_whois(self, domain: str) -> DomainCheckResult:
@@ -602,32 +680,35 @@ class DomainChecker:
            # If custom RDAP fails, fall through to DNS check
            logger.info(f"Custom RDAP failed for {domain}, using DNS fallback")

        # Priority 2: Try IANA Bootstrap RDAP (works for ALL TLDs!)
        if tld not in self.WHOIS_ONLY_TLDS and tld not in self.CUSTOM_RDAP_ENDPOINTS:
            iana_result = await self._check_rdap_iana(domain)
            if iana_result:
                # Validate with DNS if RDAP says available
                if iana_result.is_available:
                    dns_available = await self._check_dns(domain)
                    if not dns_available:
                        iana_result.status = DomainStatus.TAKEN
                        iana_result.is_available = False
                return iana_result

        # Priority 3: Fall back to WHOIS
        if tld not in self.CUSTOM_RDAP_ENDPOINTS:
            try:
                whois_result = await self._check_whois(domain)

                # Validate with DNS
                if whois_result.is_available:
                    dns_available = await self._check_dns(domain)
                    if not dns_available:
                        whois_result.status = DomainStatus.TAKEN
                        whois_result.is_available = False

                return whois_result
            except Exception as e:
                logger.debug(f"WHOIS failed for {domain}: {e}")

        # Final fallback: DNS-only check
        dns_available = await self._check_dns(domain)
        return DomainCheckResult(
            domain=domain,
@@ -711,24 +792,28 @@ async def check_all_domains(db):
    taken = 0
    errors = 0

    from app.utils.datetime import to_naive_utc

    for domain_obj in domains:
        try:
            check_result = await domain_checker.check_domain(domain_obj.name)

            # Update domain status
            domain_obj.status = check_result.status
            domain_obj.is_available = check_result.is_available
            domain_obj.last_checked = datetime.utcnow()
            domain_obj.last_check_method = check_result.check_method
            if check_result.expiration_date:
                domain_obj.expiration_date = to_naive_utc(check_result.expiration_date)

            # Create check record
            domain_check = DomainCheck(
                domain_id=domain_obj.id,
                status=check_result.status,
                is_available=check_result.is_available,
                response_data=str(check_result.to_dict()),
                checked_at=datetime.utcnow(),
            )
            db.add(domain_check)
@@ -738,10 +823,10 @@ async def check_all_domains(db):
            else:
                taken += 1

            logger.info(f"Checked {domain_obj.name}: {check_result.status.value}")

        except Exception as e:
            logger.error(f"Error checking {domain_obj.name}: {e}")
            errors += 1

    await db.commit()


@@ -4,6 +4,8 @@ Drop Status Checker
Dedicated RDAP checker for dropped domains.
Correctly identifies pending_delete, redemption, and available status.
Extracts deletion date for countdown display.
Uses IANA Bootstrap (rdap.org) as universal fallback for all TLDs.
"""

import asyncio
@@ -13,27 +15,28 @@ from dataclasses import dataclass
from datetime import datetime
from typing import Optional

from app.services.http_client_pool import get_rdap_http_client

logger = logging.getLogger(__name__)

# ============================================================================
# RDAP CONFIGURATION
# ============================================================================

# Preferred direct endpoints (faster, more reliable)
PREFERRED_ENDPOINTS = {
    'ch': 'https://rdap.nic.ch/domain/',
    'li': 'https://rdap.nic.ch/domain/',
    'de': 'https://rdap.denic.de/domain/',
}

# IANA Bootstrap - works for ALL TLDs (redirects to correct registry)
IANA_BOOTSTRAP = 'https://rdap.org/domain/'

# Rate limiting settings
RDAP_TIMEOUT = 15  # seconds
RATE_LIMIT_DELAY = 0.3  # 300ms between requests = ~3 req/s
@dataclass
class DropStatus:
@@ -44,174 +47,197 @@ class DropStatus:
    can_register_now: bool
    should_monitor: bool
    message: str
    deletion_date: Optional[datetime] = None
    check_method: str = "rdap"


async def _make_rdap_request(client: httpx.AsyncClient, url: str, domain: str) -> Optional[dict]:
    """Make a single RDAP request with proper error handling."""
    try:
        resp = await client.get(url, timeout=RDAP_TIMEOUT)

        if resp.status_code == 404:
            # Domain not found = available
            return {"_available": True, "_status_code": 404}

        if resp.status_code == 200:
            data = resp.json()
            data["_status_code"] = 200
            return data

        if resp.status_code == 429:
            logger.warning(f"RDAP rate limited for {domain}")
            return {"_rate_limited": True, "_status_code": 429}

        logger.warning(f"RDAP returned {resp.status_code} for {domain}")
        return None

    except httpx.TimeoutException:
        logger.debug(f"RDAP timeout for {domain} at {url}")
        return None
    except Exception as e:
        logger.debug(f"RDAP error for {domain}: {e}")
        return None
async def check_drop_status(domain: str) -> DropStatus:
    """
    Check the real status of a dropped domain via RDAP.

    Strategy:
    1. Try preferred direct endpoint (if available for TLD)
    2. Fall back to IANA Bootstrap (works for all TLDs)

    Returns:
        DropStatus with one of:
        - 'available': Domain can be registered NOW
        - 'dropping_soon': Domain is in pending delete/redemption
        - 'taken': Domain was re-registered
        - 'unknown': Could not determine status
    """
    tld = domain.split('.')[-1].lower()

    # Try preferred endpoint first
    data = None
    check_method = "rdap"
    client = await get_rdap_http_client()

    if tld in PREFERRED_ENDPOINTS:
        url = f"{PREFERRED_ENDPOINTS[tld]}{domain}"
        data = await _make_rdap_request(client, url, domain)
        check_method = f"rdap_{tld}"

    # Fall back to IANA Bootstrap if no data yet
    if data is None:
        url = f"{IANA_BOOTSTRAP}{domain}"
        data = await _make_rdap_request(client, url, domain)
        check_method = "rdap_iana"

    # Still no data? Return unknown
    if data is None:
        return DropStatus(
            domain=domain,
            status='unknown',
            rdap_status=[],
            can_register_now=False,
            should_monitor=True,
            message="RDAP check failed - will retry later",
            check_method="failed",
        )

    # Rate limited
    if data.get("_rate_limited"):
        return DropStatus(
            domain=domain,
            status='unknown',
            rdap_status=[],
            can_register_now=False,
            should_monitor=True,
            message="Rate limited - will retry later",
            check_method="rate_limited",
        )

    # Domain available (404)
    if data.get("_available"):
        return DropStatus(
            domain=domain,
            status='available',
            rdap_status=[],
            can_register_now=True,
            should_monitor=False,
            message="Domain is available for registration!",
            check_method=check_method,
        )

    # Domain exists - parse status
    rdap_status = data.get('status', [])
    status_lower = ' '.join(str(s).lower() for s in rdap_status)

    # Extract deletion date from events
    deletion_date = None
    events = data.get('events', [])
    for event in events:
        action = event.get('eventAction', '').lower()
        date_str = event.get('eventDate', '')
        if action in ('deletion', 'expiration') and date_str:
            try:
                deletion_date = datetime.fromisoformat(date_str.replace('Z', '+00:00'))
            except (ValueError, TypeError):
                pass

    # Check for pending delete / redemption status
    is_pending = any(x in status_lower for x in [
        'pending delete', 'pendingdelete',
        'pending purge', 'pendingpurge',
        'redemption period', 'redemptionperiod',
        'pending restore', 'pendingrestore',
        'pending renewal', 'pendingrenewal',
    ])

    if is_pending:
        return DropStatus(
            domain=domain,
            status='dropping_soon',
            rdap_status=rdap_status,
            can_register_now=False,
            should_monitor=True,
            message="Domain is being deleted. Track it to get notified!",
            deletion_date=deletion_date,
            check_method=check_method,
        )

    # Domain is actively registered
    return DropStatus(
        domain=domain,
        status='taken',
        rdap_status=rdap_status,
        can_register_now=False,
        should_monitor=False,
        message="Domain was re-registered",
        deletion_date=None,
        check_method=check_method,
    )
async def check_drops_batch( async def check_drops_batch(
domains: list[tuple[int, str]], # List of (id, full_domain) domains: list[tuple[int, str]],
delay_between_requests: float = 0.2, # 200ms = 5 req/s delay_between_requests: float = RATE_LIMIT_DELAY,
max_concurrent: int = 3,
) -> list[tuple[int, DropStatus]]: ) -> list[tuple[int, DropStatus]]:
""" """
Check multiple drops with rate limiting. Check multiple drops with rate limiting and concurrency control.
Args: Args:
domains: List of (drop_id, full_domain) tuples domains: List of (drop_id, full_domain) tuples
delay_between_requests: Seconds to wait between requests (default 200ms) delay_between_requests: Seconds to wait between requests
max_concurrent: Maximum concurrent requests
Returns: Returns:
List of (drop_id, DropStatus) tuples List of (drop_id, DropStatus) tuples
""" """
semaphore = asyncio.Semaphore(max_concurrent)
results = [] results = []
for drop_id, domain in domains: async def check_with_semaphore(drop_id: int, domain: str) -> tuple[int, DropStatus]:
try: async with semaphore:
status = await check_drop_status(domain) try:
results.append((drop_id, status)) status = await check_drop_status(domain)
except Exception as e: await asyncio.sleep(delay_between_requests)
logger.error(f"Batch check failed for {domain}: {e}") return (drop_id, status)
results.append((drop_id, DropStatus( except Exception as e:
domain=domain, logger.error(f"Batch check failed for {domain}: {e}")
status='unknown', return (drop_id, DropStatus(
rdap_status=[], domain=domain,
can_register_now=False, status='unknown',
should_monitor=False, rdap_status=[],
message=str(e), can_register_now=False,
))) should_monitor=False,
message=str(e),
# Rate limit check_method="error",
await asyncio.sleep(delay_between_requests) ))
return results # Run with limited concurrency
tasks = [check_with_semaphore(drop_id, domain) for drop_id, domain in domains]
results = await asyncio.gather(*tasks)
return list(results)


@@ -727,5 +727,63 @@ class EmailService:
        )

    @staticmethod
    async def send_ops_alert(
        alert_type: str,
        title: str,
        details: str,
        severity: str = "warning",  # info, warning, error, critical
    ) -> bool:
        """
        Send operational alert to admin email.

        Used for:
        - Zone sync failures
        - Database connection issues
        - Scheduler job failures
        - Security incidents
        """
        settings = get_settings()
        admin_email = settings.smtp_from_email  # Send to ourselves for now

        # Build HTML content
        severity_colors = {
            "info": "#3b82f6",
            "warning": "#f59e0b",
            "error": "#ef4444",
            "critical": "#dc2626",
        }
        color = severity_colors.get(severity, "#6b7280")

        html = f"""
        <div style="font-family: system-ui, sans-serif; max-width: 600px; margin: 0 auto; background: #0a0a0a; color: #fff; padding: 24px;">
            <div style="border-left: 4px solid {color}; padding-left: 16px; margin-bottom: 24px;">
                <h1 style="margin: 0 0 8px 0; font-size: 18px; color: {color}; text-transform: uppercase;">
                    [{severity.upper()}] {alert_type}
                </h1>
                <h2 style="margin: 0; font-size: 24px; color: #fff;">{title}</h2>
            </div>
            <div style="background: #111; padding: 16px; border: 1px solid #222; font-family: monospace; font-size: 13px; white-space: pre-wrap;">
{details}
            </div>
            <div style="margin-top: 24px; font-size: 12px; color: #666;">
                <p>Timestamp: {datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")}</p>
                <p>Server: pounce.ch</p>
            </div>
        </div>
        """
        subject = f"[POUNCE OPS] {severity.upper()}: {title}"

        return await EmailService.send_email(
            to_email=admin_email,
            subject=subject,
            html_content=html,
            text_content=f"[{severity.upper()}] {alert_type}: {title}\n\n{details}",
        )


# Global instance
email_service = EmailService()


@@ -0,0 +1,70 @@
"""
Shared HTTP clients for performance.

Why:
- Creating a new httpx.AsyncClient per request is expensive (TLS handshakes, no connection reuse).
- For high-frequency lookups (RDAP), we keep one pooled AsyncClient per process.

Notes:
- Per-request timeouts can still be overridden in client.get(..., timeout=...).
- Call close_* on shutdown for clean exit (optional but recommended).
"""
from __future__ import annotations

import asyncio
from typing import Optional

import httpx

_rdap_client: Optional[httpx.AsyncClient] = None
_rdap_client_lock = asyncio.Lock()


def _rdap_limits() -> httpx.Limits:
    # Conservative but effective defaults (works well for bursty traffic).
    return httpx.Limits(max_connections=50, max_keepalive_connections=20, keepalive_expiry=30.0)


def _rdap_timeout() -> httpx.Timeout:
    # Overall timeout can be overridden per request.
    return httpx.Timeout(15.0, connect=5.0)


async def get_rdap_http_client() -> httpx.AsyncClient:
    """
    Get a shared httpx.AsyncClient for RDAP requests.
    Safe for concurrent use within the same event loop.
    """
    global _rdap_client
    if _rdap_client is not None and not _rdap_client.is_closed:
        return _rdap_client

    async with _rdap_client_lock:
        if _rdap_client is not None and not _rdap_client.is_closed:
            return _rdap_client

        _rdap_client = httpx.AsyncClient(
            timeout=_rdap_timeout(),
            follow_redirects=True,
            limits=_rdap_limits(),
            headers={
                # Be a good citizen; many registries/redirectors are sensitive.
                "User-Agent": "pounce/1.0 (+https://pounce.ch)",
                "Accept": "application/rdap+json, application/json",
            },
        )
        return _rdap_client


async def close_rdap_http_client() -> None:
    """Close the shared RDAP client (best-effort)."""
    global _rdap_client
    if _rdap_client is None:
        return
    try:
        if not _rdap_client.is_closed:
            await _rdap_client.aclose()
    finally:
        _rdap_client = None
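The double-checked locking used here — a lock-free fast path, then a re-check inside the lock — can be demonstrated without httpx. A sketch with a dummy `Resource` class standing in for the pooled client (names are illustrative):

```python
import asyncio
from typing import Optional

class Resource:
    """Stand-in for an expensive client (e.g. a pooled HTTP client)."""
    def __init__(self):
        self.closed = False

_resource: Optional[Resource] = None
_resource_lock = asyncio.Lock()

async def get_resource() -> Resource:
    global _resource
    # Fast path: no lock once initialized
    if _resource is not None and not _resource.closed:
        return _resource
    async with _resource_lock:
        # Re-check inside the lock: another task may have won the race
        if _resource is not None and not _resource.closed:
            return _resource
        _resource = Resource()
        return _resource

async def demo():
    # Two concurrent callers must receive the same instance
    a, b = await asyncio.gather(get_resource(), get_resource())
    return a is b

same = asyncio.run(demo())
```

The lock is only contended during initialization; afterwards every caller takes the unlocked fast path.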


@@ -15,30 +15,17 @@ from pathlib import Path
from typing import Optional

from sqlalchemy import select, func
from sqlalchemy.dialects.postgresql import insert as pg_insert
from sqlalchemy.dialects.sqlite import insert as sqlite_insert
from sqlalchemy.ext.asyncio import AsyncSession

from app.config import get_settings
from app.models.zone_file import ZoneSnapshot, DroppedDomain
from app.utils.datetime import to_iso_utc, to_naive_utc

logger = logging.getLogger(__name__)

ZONE_SERVER = "zonedata.switch.ch"

# ============================================================================
@@ -49,18 +36,36 @@ class ZoneFileService:
    """Service for fetching and analyzing zone files"""

    def __init__(self, data_dir: Optional[Path] = None):
        settings = get_settings()
        self.data_dir = data_dir or Path(settings.switch_data_dir)
        self.data_dir.mkdir(parents=True, exist_ok=True)
        self._settings = settings

        # Store daily snapshots for N days (premium reliability)
        self.snapshots_dir = self.data_dir / "snapshots"
        self.snapshots_dir.mkdir(parents=True, exist_ok=True)

    def _get_tsig_config(self, tld: str) -> dict:
        """Resolve TSIG config from settings/env (no secrets in git)."""
        if tld == "ch":
            return {
                "name": self._settings.switch_tsig_ch_name,
                "algorithm": self._settings.switch_tsig_ch_algorithm,
                "secret": self._settings.switch_tsig_ch_secret,
            }
        if tld == "li":
            return {
                "name": self._settings.switch_tsig_li_name,
                "algorithm": self._settings.switch_tsig_li_algorithm,
                "secret": self._settings.switch_tsig_li_secret,
            }
        raise ValueError(f"Unknown TLD: {tld}")

    def _get_key_file_path(self, tld: str) -> Path:
        """Generate TSIG key file for dig command"""
        key_path = self.data_dir / f"{tld}_zonedata.key"
        key_info = self._get_tsig_config(tld)
        if not (key_info.get("secret") or "").strip():
            raise RuntimeError(f"Missing Switch TSIG secret for .{tld} (set SWITCH_TSIG_{tld.upper()}_SECRET)")

        # Write TSIG key file in BIND format
        key_content = f"""key "{key_info['name']}" {{
@@ -76,7 +81,7 @@ class ZoneFileService:
        Fetch zone file via DNS AXFR transfer.
        Returns set of domain names (without TLD suffix).
        """
        if tld not in ("ch", "li"):
            raise ValueError(f"Unsupported TLD: {tld}. Only 'ch' and 'li' are supported.")

        logger.info(f"Starting zone transfer for .{tld}")
@@ -143,22 +148,60 @@ class ZoneFileService:
    async def get_previous_snapshot(self, db: AsyncSession, tld: str) -> Optional[set[str]]:
        """Load previous day's domain set from cache file"""
        # Prefer most recent snapshot file before today (supports N-day retention)
        tld_dir = self.snapshots_dir / tld
        if tld_dir.exists():
            candidates = sorted([p for p in tld_dir.glob("*.domains.txt") if p.is_file()])
            if candidates:
                # Pick the latest snapshot file (by name sort = date sort)
                latest = candidates[-1]
                try:
                    content = latest.read_text()
                    return set(line.strip() for line in content.splitlines() if line.strip())
                except Exception as e:
                    logger.warning(f"Failed to load snapshot for .{tld} from {latest.name}: {e}")

        # Fallback: legacy cache file
        cache_file = self.data_dir / f"{tld}_domains.txt"
        if cache_file.exists():
            try:
                content = cache_file.read_text()
                return set(line.strip() for line in content.splitlines() if line.strip())
            except Exception as e:
                logger.warning(f"Failed to load cache for .{tld}: {e}")
        return None

    def _cleanup_snapshot_files(self, tld: str) -> None:
        """Delete snapshot files older than retention window (best-effort)."""
        keep_days = int(self._settings.zone_retention_days or 3)
        cutoff = datetime.utcnow().date() - timedelta(days=keep_days)
        tld_dir = self.snapshots_dir / tld
        if not tld_dir.exists():
            return
        for p in tld_dir.glob("*.domains.txt"):
            try:
                # filename: YYYY-MM-DD.domains.txt
                date_part = p.name.split(".")[0]
                snap_date = datetime.fromisoformat(date_part).date()
                if snap_date < cutoff:
                    p.unlink(missing_ok=True)
            except Exception:
                # Don't let cleanup break sync
                continue

    async def save_snapshot(self, db: AsyncSession, tld: str, domains: set[str]):
        """Save current snapshot to cache and database"""
        # Save to legacy cache file (fast path)
        cache_file = self.data_dir / f"{tld}_domains.txt"
        cache_file.write_text("\n".join(sorted(domains)))

        # Save a daily snapshot file for retention/debugging
        tld_dir = self.snapshots_dir / tld
        tld_dir.mkdir(parents=True, exist_ok=True)
        today_str = datetime.utcnow().date().isoformat()
        snapshot_file = tld_dir / f"{today_str}.domains.txt"
        snapshot_file.write_text("\n".join(sorted(domains)))
        self._cleanup_snapshot_files(tld)

        # Save metadata to database
        checksum = self.compute_checksum(domains)
@@ -181,90 +224,70 @@ class ZoneFileService:
        current: set[str]
    ) -> list[dict]:
        """
        Find dropped domains and store them directly.

        NOTE: We do NOT verify availability via RDAP here to avoid rate limits/bans.
        Zone file diff is already a reliable signal that the domain was dropped.
        """
        dropped = previous - current
        if not dropped:
            logger.info(f"No dropped domains found for .{tld}")
            return []

        logger.info(f"Found {len(dropped):,} dropped domains for .{tld}, saving to database...")
        today = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
        dropped_list = list(dropped)

        rows = [
            {
                "domain": name,
                "tld": tld,
                "dropped_date": today,
                "length": len(name),
                "is_numeric": name.isdigit(),
                "has_hyphen": "-" in name,
                "availability_status": "unknown",
            }
            for name in dropped_list
        ]

        # Bulk insert with conflict-ignore (needs unique index, see db_migrations.py)
        dialect = db.get_bind().dialect.name if db.get_bind() is not None else "unknown"
        batch_size = 5000
        inserted_total = 0

        for i in range(0, len(rows), batch_size):
            batch = rows[i : i + batch_size]

            if dialect == "postgresql":
                stmt = (
                    pg_insert(DroppedDomain)
                    .values(batch)
                    .on_conflict_do_nothing(index_elements=["domain", "tld", "dropped_date"])
                )
            elif dialect == "sqlite":
                # SQLite: INSERT OR IGNORE (unique index is still respected)
                stmt = sqlite_insert(DroppedDomain).values(batch).prefix_with("OR IGNORE")
            else:
                # Fallback: best-effort plain insert; duplicates are handled by DB constraints if present.
                stmt = pg_insert(DroppedDomain).values(batch)

            result = await db.execute(stmt)
            # rowcount is driver-dependent; still useful for postgres/sqlite
            inserted_total += int(getattr(result, "rowcount", 0) or 0)
            await db.commit()

            if (i + batch_size) % 20000 == 0:
                logger.info(f"Saved {min(i + batch_size, len(rows)):,}/{len(rows):,} drops (inserted so far: {inserted_total:,})")

        logger.info(f"Zone drops for .{tld}: {inserted_total:,} inserted (out of {len(rows):,} diff)")

        # Return a small preview list (avoid returning huge payloads)
        preview = [{"domain": f"{r['domain']}.{tld}", "length": r["length"]} for r in rows[:200]]
        return preview
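The conflict-ignore bulk insert can be sketched against plain `sqlite3`, which honors the same unique-index semantics as the SQLAlchemy path (table and index names here are illustrative, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dropped (domain TEXT, tld TEXT, dropped_date TEXT)")
# Conflict-ignore only works against a unique constraint, as noted in the code
conn.execute("CREATE UNIQUE INDEX ux_drop ON dropped (domain, tld, dropped_date)")

rows = [
    ("foo", "ch", "2025-12-21"),
    ("bar", "ch", "2025-12-21"),
    ("foo", "ch", "2025-12-21"),  # duplicate: silently skipped, no error
]

# One executemany instead of one INSERT per row
conn.executemany("INSERT OR IGNORE INTO dropped VALUES (?, ?, ?)", rows)
count = conn.execute("SELECT COUNT(*) FROM dropped").fetchone()[0]
```

Without the `OR IGNORE` prefix (or Postgres's `ON CONFLICT DO NOTHING`), the duplicate row would abort the whole batch with an integrity error.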
    async def run_daily_sync(self, db: AsyncSession, tld: str) -> dict:
        """
@@ -370,13 +393,17 @@ async def get_dropped_domains(
            "id": item.id,
            "domain": item.domain,
            "tld": item.tld,
            "dropped_date": to_iso_utc(item.dropped_date),
            "length": item.length,
            "is_numeric": item.is_numeric,
            "has_hyphen": item.has_hyphen,
            # Canonical status fields (keep old key for backwards compat)
            "availability_status": getattr(item, "availability_status", "unknown") or "unknown",
            "status": getattr(item, "availability_status", "unknown") or "unknown",
            "last_status_check": to_iso_utc(item.last_status_check),
            "status_checked_at": to_iso_utc(item.last_status_check),
            "status_source": getattr(item, "last_check_method", None),
            "deletion_date": to_iso_utc(item.deletion_date),
        }
        for item in items
    ]
@@ -479,8 +506,9 @@ async def verify_drops_availability(
    Returns:
        dict with stats: checked, available, dropping_soon, taken, errors
    """
    from sqlalchemy import update, bindparam, case
    from app.services.drop_status_checker import check_drops_batch
    from app.config import get_settings

    logger.info(f"Starting drops status update (max {max_checks} checks)...")

@@ -488,16 +516,26 @@ async def verify_drops_availability(
    cutoff = datetime.utcnow() - timedelta(hours=24)
    check_cutoff = datetime.utcnow() - timedelta(hours=2)  # Re-check every 2 hours

    # Prioritization (fast + predictable):
    #   1) never checked first
    #   2) then oldest check first
    #   3) then unknown status
    #   4) then shortest domains first
    unknown_first = case((DroppedDomain.availability_status == "unknown", 0), else_=1)
    never_checked_first = case((DroppedDomain.last_status_check.is_(None), 0), else_=1)

    query = (
        select(DroppedDomain)
        .where(DroppedDomain.dropped_date >= cutoff)
        .where(
            (DroppedDomain.last_status_check.is_(None))  # Never checked
            | (DroppedDomain.last_status_check < check_cutoff)  # Not checked recently
        )
        .order_by(
            never_checked_first.asc(),
            DroppedDomain.last_status_check.asc().nullsfirst(),
            unknown_first.asc(),
            DroppedDomain.length.asc(),
        )
        .limit(max_checks)
    )
@@ -513,41 +551,61 @@ async def verify_drops_availability(
    stats = {"available": 0, "dropping_soon": 0, "taken": 0, "unknown": 0}
    errors = 0

    logger.info(f"Checking {len(drops)} dropped domains (batch mode)...")

    settings = get_settings()
    delay = float(getattr(settings, "domain_check_delay_seconds", 0.3) or 0.3)
    max_concurrent = int(getattr(settings, "domain_check_max_concurrent", 3) or 3)

    # Build (drop_id, domain) tuples for batch checker
    domain_tuples: list[tuple[int, str]] = [(d.id, f"{d.domain}.{d.tld}") for d in drops]

    # Process in batches to bound memory + keep DB commits reasonable
    now = datetime.utcnow()
    for start in range(0, len(domain_tuples), batch_size):
        batch = domain_tuples[start : start + batch_size]
        results = await check_drops_batch(
            batch,
            delay_between_requests=delay,
            max_concurrent=max_concurrent,
        )

        # Prepare bulk updates
        updates: list[dict] = []
        for drop_id, status_result in results:
            checked += 1
            stats[status_result.status] = stats.get(status_result.status, 0) + 1
            updates.append(
                {
                    "id": drop_id,
                    "availability_status": status_result.status,
                    "rdap_status": str(status_result.rdap_status)[:255] if status_result.rdap_status else None,
                    "last_status_check": now,
                    "deletion_date": to_naive_utc(status_result.deletion_date),
                    "last_check_method": status_result.check_method,
                }
            )

        # Bulk update using executemany
        stmt = (
            update(DroppedDomain)
            .where(DroppedDomain.id == bindparam("id"))
            .values(
                availability_status=bindparam("availability_status"),
                rdap_status=bindparam("rdap_status"),
                last_status_check=bindparam("last_status_check"),
                deletion_date=bindparam("deletion_date"),
                last_check_method=bindparam("last_check_method"),
            )
        )
        await db.execute(stmt, updates)
        await db.commit()
        logger.info(f"Checked {min(start + batch_size, len(domain_tuples))}/{len(domain_tuples)}: {stats}")

    # Final commit: already done per batch

    logger.info(
        f"Drops status update complete: "
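The executemany-style bulk update can likewise be sketched with `sqlite3`: one prepared UPDATE statement, many parameter sets, instead of one round-trip per row (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE drops (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO drops VALUES (?, ?)",
    [(1, "unknown"), (2, "unknown"), (3, "unknown")],
)

# One statement, many named-parameter dicts -- the executemany pattern
updates = [
    {"id": 1, "status": "taken"},
    {"id": 3, "status": "available"},
]
conn.executemany("UPDATE drops SET status = :status WHERE id = :id", updates)

statuses = [row[1] for row in conn.execute("SELECT id, status FROM drops ORDER BY id")]
```

Row 2 is untouched because no parameter set matched its `id`; the driver simply re-binds the same prepared statement for each dict.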


@@ -44,16 +44,34 @@ def get_optimal_workers() -> int:

def get_ram_drive_path() -> Optional[Path]:
    """
    Get path for temporary zone file processing.

    Priority:
    1. CZDS_DATA_DIR environment variable (persistent storage)
    2. /data/czds (Docker volume mount)
    3. /tmp fallback

    Note: We avoid /dev/shm in Docker as it's typically limited to 64MB.
    With 1.7TB disk and NVMe, disk-based processing is fast enough.
    """
    from app.config import get_settings

    # Use configured data directory (mounted volume)
    settings = get_settings()
    if settings.czds_data_dir:
        data_path = Path(settings.czds_data_dir) / "tmp"
        try:
            data_path.mkdir(parents=True, exist_ok=True)
            return data_path
        except PermissionError:
            pass

    # Docker volume mount
    if os.path.exists("/data/czds"):
        data_path = Path("/data/czds/tmp")
        try:
            data_path.mkdir(parents=True, exist_ok=True)
            return data_path
        except PermissionError:
            pass


@@ -0,0 +1,2 @@
"""Shared utility helpers (small, dependency-free)."""


@@ -0,0 +1,34 @@
from __future__ import annotations

from datetime import datetime, timezone


def to_naive_utc(dt: datetime | None) -> datetime | None:
    """
    Convert a timezone-aware datetime to naive UTC (tzinfo removed).

    Our DB columns are DateTime without timezone. Persisting timezone-aware
    datetimes can cause runtime errors (especially on Postgres).
    """
    if dt is None:
        return None
    if dt.tzinfo is None:
        return dt
    return dt.astimezone(timezone.utc).replace(tzinfo=None)


def to_iso_utc(dt: datetime | None) -> str | None:
    """
    Serialize a datetime as an ISO-8601 UTC string.

    - If dt is timezone-aware: convert to UTC and use "Z".
    - If dt is naive: treat it as UTC and use "Z".
    """
    if dt is None:
        return None
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    else:
        dt = dt.astimezone(timezone.utc)
    return dt.isoformat().replace("+00:00", "Z")
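Redefining the two helpers inline for a self-contained sanity check: an 18:14 CET (+01:00) timestamp becomes 17:14 naive UTC, and serializes with a trailing "Z" whether it arrives aware or naive:

```python
from datetime import datetime, timezone, timedelta

def to_naive_utc(dt):
    # Aware -> naive UTC; naive passes through unchanged
    if dt is None:
        return None
    if dt.tzinfo is None:
        return dt
    return dt.astimezone(timezone.utc).replace(tzinfo=None)

def to_iso_utc(dt):
    # Naive input is treated as already-UTC; aware input is converted
    if dt is None:
        return None
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    else:
        dt = dt.astimezone(timezone.utc)
    return dt.isoformat().replace("+00:00", "Z")

cet = datetime(2025, 12, 21, 18, 14, tzinfo=timezone(timedelta(hours=1)))
naive = to_naive_utc(cet)
iso = to_iso_utc(cet)
```

Because `to_iso_utc` treats naive input as UTC, persisting `to_naive_utc(...)` and serializing it later yields the same instant.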


@@ -18,6 +18,7 @@ load_dotenv()
from app.config import get_settings
from app.database import init_db
from app.scheduler import start_scheduler, stop_scheduler
from app.services.http_client_pool import close_rdap_http_client

logging.basicConfig(
    level=logging.INFO,

@@ -54,6 +55,7 @@ async def main() -> None:
    await stop_event.wait()

    stop_scheduler()
    await close_rdap_http_client()
    logger.info("Scheduler stopped. Bye.")


@@ -11,8 +11,9 @@ services:
      - pounce-network
      - supabase-network
    environment:
      # NOTE: Do NOT hardcode credentials in git.
      - DATABASE_URL=${DATABASE_URL}
      - JWT_SECRET=${JWT_SECRET}
      - FRONTEND_URL=http://pounce.185-142-213-170.sslip.io
      - ENVIRONMENT=production
      - ENABLE_SCHEDULER=true


@@ -15,7 +15,9 @@ COPY . .

# Build arguments
ARG NEXT_PUBLIC_API_URL
ARG BACKEND_URL
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
ENV BACKEND_URL=${BACKEND_URL}
ENV NODE_OPTIONS="--max-old-space-size=2048"
ENV NEXT_TELEMETRY_DISABLED=1


@@ -161,8 +161,12 @@ const nextConfig = {
  // Proxy API requests to backend
  // This ensures /api/v1/* works regardless of how the server is accessed
  async rewrites() {
    // In production (Docker), use internal container hostname
    // In development, use localhost
    const isProduction = process.env.NODE_ENV === 'production'
    const backendUrl = process.env.BACKEND_URL || (isProduction ? 'http://pounce-backend:8000' : 'http://127.0.0.1:8000')
    console.log(`[Next.js Config] Backend URL: ${backendUrl}`)
    return [
      {


@@ -5,6 +5,7 @@ import { useRouter } from 'next/navigation'
import { useStore } from '@/lib/store'
import { api } from '@/lib/api'
import { EarningsTab } from '@/components/admin/EarningsTab'
import { ZonesTab } from '@/components/admin/ZonesTab'
import { PremiumTable, Badge, TableActionButton, StatCard } from '@/components/PremiumTable'
import {
  Users,

@@ -56,7 +57,7 @@ import Image from 'next/image'
// TYPES
// ============================================================================

type TabType = 'overview' | 'earnings' | 'telemetry' | 'users' | 'alerts' | 'newsletter' | 'tld' | 'auctions' | 'blog' | 'system' | 'activity' | 'zones'

interface AdminStats {
  users: { total: number; active: number; verified: number; new_this_week: number }

@@ -89,6 +90,7 @@ const TABS: Array<{ id: TabType; label: string; icon: any; shortLabel?: string }
  { id: 'overview', label: 'Overview', icon: Activity, shortLabel: 'Overview' },
  { id: 'earnings', label: 'Earnings', icon: DollarSign, shortLabel: 'Earnings' },
  { id: 'telemetry', label: 'Telemetry', icon: BarChart3, shortLabel: 'KPIs' },
  { id: 'zones', label: 'Zone Sync', icon: RefreshCw, shortLabel: 'Zones' },
  { id: 'users', label: 'Users', icon: Users, shortLabel: 'Users' },
  { id: 'newsletter', label: 'Newsletter', icon: Mail, shortLabel: 'News' },
  { id: 'tld', label: 'TLD Data', icon: Globe, shortLabel: 'TLD' },

@@ -638,6 +640,9 @@ export default function AdminPage() {
        {/* Earnings Tab */}
        {activeTab === 'earnings' && <EarningsTab />}

        {/* Zones Tab */}
        {activeTab === 'zones' && <ZonesTab />}

        {/* Telemetry Tab */}
        {activeTab === 'telemetry' && telemetry && (
          <div className="space-y-6">

View File

@@ -42,6 +42,7 @@ import {
import clsx from 'clsx'
import Link from 'next/link'
import Image from 'next/image'
import { daysUntil, formatCountdown } from '@/lib/time'
// ============================================================================
// ADD MODAL COMPONENT (like Portfolio)
@@ -119,14 +120,6 @@ function AddModal({
// HELPERS
// ============================================================================
function getDaysUntilExpiry(expirationDate: string | null): number | null {
if (!expirationDate) return null
const expDate = new Date(expirationDate)
const now = new Date()
const diffTime = expDate.getTime() - now.getTime()
return Math.ceil(diffTime / (1000 * 60 * 60 * 24))
}
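The local `getDaysUntilExpiry` removed above is replaced by shared helpers imported from `@/lib/time`. That module isn't part of this diff; the following is a hypothetical sketch of what `parseIsoAsUtc`, `daysUntil`, and `formatCountdown` might look like, inferred only from how they are called here (bodies and the naive-UTC parsing rule are assumptions, not the actual source):

```typescript
// Sketch of the assumed @/lib/time helpers (not the real module).

// Parse an ISO string as UTC even when it lacks a timezone suffix, so
// naive timestamps from the API don't shift with the viewer's locale.
export function parseIsoAsUtc(iso: string): Date {
  return new Date(/Z|[+-]\d{2}:?\d{2}$/.test(iso) ? iso : iso + 'Z')
}

// Whole days until the given date, or null when absent.
export function daysUntil(date: string | null): number | null {
  if (!date) return null
  const diff = parseIsoAsUtc(date).getTime() - Date.now()
  return Math.ceil(diff / 86_400_000)
}

// "2d 4h" / "4h 12m" / "12m" countdown, or null once the moment has passed.
export function formatCountdown(date: string | null): string | null {
  if (!date) return null
  const diff = parseIsoAsUtc(date).getTime() - Date.now()
  if (diff <= 0) return null
  const days = Math.floor(diff / 86_400_000)
  const hours = Math.floor((diff % 86_400_000) / 3_600_000)
  const mins = Math.floor((diff % 3_600_000) / 60_000)
  if (days > 0) return `${days}d ${hours}h`
  if (hours > 0) return `${hours}h ${mins}m`
  return `${mins}m`
}
```

Returning `null` for past deadlines (rather than the old local helper's `'Now'`) matches how the callers below fall back to a plain label when no countdown is available.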
function formatExpiryDate(expirationDate: string | null): string {
if (!expirationDate) return ''
return new Date(expirationDate).toLocaleDateString('en-US', { month: 'short', day: 'numeric', year: 'numeric' })
@@ -147,7 +140,24 @@ const healthConfig: Record<HealthStatus, { label: string; color: string; bg: str
export default function WatchlistPage() {
const { domains, addDomain, deleteDomain, refreshDomain, updateDomain, subscription, user, logout, checkAuth } = useStore()
const { toast, showToast, hideToast } = useToast()
const openAnalyzePanel = useAnalyzePanelStore((s) => s.open)
// Wrapper to open analyze panel with domain status
const openAnalyze = useCallback((domainData: { name: string; status: string; is_available: boolean; expiration_date: string | null; deletion_date?: string | null }) => {
// Map domain status to drop status format
const statusMap: Record<string, 'available' | 'dropping_soon' | 'taken' | 'unknown'> = {
'available': 'available',
'dropping_soon': 'dropping_soon',
'taken': 'taken',
'error': 'unknown',
'unknown': 'unknown',
}
openAnalyzePanel(domainData.name, {
status: statusMap[domainData.status] || (domainData.is_available ? 'available' : 'taken'),
deletion_date: domainData.deletion_date || null,
is_drop: false,
})
}, [openAnalyzePanel])
// Modal state
const [showAddModal, setShowAddModal] = useState(false)
@@ -184,7 +194,7 @@ export default function WatchlistPage() {
const available = domains?.filter(d => d.is_available) || []
const expiringSoon = domains?.filter(d => {
if (d.is_available || !d.expiration_date) return false
const days = daysUntil(d.expiration_date)
return days !== null && days <= 30 && days > 0
}) || []
return { total: domains?.length || 0, available: available.length, expiring: expiringSoon.length }
@@ -196,7 +206,7 @@ export default function WatchlistPage() {
let filtered = domains.filter(d => {
if (filter === 'available') return d.is_available
if (filter === 'expiring') {
const days = daysUntil(d.expiration_date)
return days !== null && days <= 30 && days > 0
}
return true
@@ -596,7 +606,18 @@ export default function WatchlistPage() {
const health = healthReports[domain.id]
const healthStatus = health?.status || 'unknown'
const config = healthConfig[healthStatus]
const days = daysUntil(domain.expiration_date)
// Domain status display config (consistent with DropsTab)
const domainStatus = domain.status || (domain.is_available ? 'available' : 'taken')
const transitionCountdown = domainStatus === 'dropping_soon' ? formatCountdown(domain.deletion_date ?? null) : null
const statusConfig = {
available: { label: 'AVAIL', color: 'text-accent', bg: 'bg-accent/5 border-accent/20' },
dropping_soon: { label: transitionCountdown ? `TRANSITION • ${transitionCountdown}` : 'TRANSITION', color: 'text-amber-400', bg: 'bg-amber-400/5 border-amber-400/20' },
taken: { label: 'TAKEN', color: 'text-white/40', bg: 'bg-white/5 border-white/10' },
error: { label: 'ERROR', color: 'text-rose-400', bg: 'bg-rose-400/5 border-rose-400/20' },
unknown: { label: 'CHECK', color: 'text-white/30', bg: 'bg-white/5 border-white/5' },
}[domainStatus] || { label: 'UNKNOWN', color: 'text-white/30', bg: 'bg-white/5 border-white/5' }
return (
<div
@@ -608,27 +629,27 @@ export default function WatchlistPage() {
<div className="flex items-start justify-between gap-4 mb-4">
<div className="min-w-0">
<button
onClick={() => openAnalyze(domain)}
className="text-lg font-bold text-white font-mono truncate block text-left hover:text-accent transition-colors"
>
{domain.name}
</button>
<div className="flex items-center gap-2 mt-2 text-[10px] font-mono text-white/30 uppercase tracking-wider">
<span className="bg-white/5 px-2 py-0.5 border border-white/5">{domain.registrar || 'Unknown'}</span>
{domainStatus === 'dropping_soon' && transitionCountdown ? (
<span className="text-amber-400 font-bold">drops in {transitionCountdown}</span>
) : days !== null && days <= 30 && days > 0 ? (
<span className="text-orange-400 font-bold">{days}d left</span>
) : null}
</div>
</div>
<div className="text-right shrink-0">
<div className={clsx(
"text-[10px] font-mono px-2 py-0.5 mt-1 inline-block border",
statusConfig.color, statusConfig.bg
)}>
{statusConfig.label}
</div>
</div>
</div>
@@ -667,7 +688,7 @@ export default function WatchlistPage() {
)}
<button
onClick={() => openAnalyze(domain)}
className="w-14 h-12 border border-white/10 text-white/50 flex items-center justify-center hover:text-accent hover:border-accent/30 hover:bg-accent/5 transition-all"
>
<Shield className="w-5 h-5" />
@@ -695,7 +716,7 @@ export default function WatchlistPage() {
{/* Domain */}
<div className="flex items-center gap-3 min-w-0">
<button
onClick={() => openAnalyze(domain)}
className="text-sm font-bold text-white font-mono truncate group-hover:text-accent transition-colors text-left"
title="Analyze"
>
@@ -710,11 +731,9 @@ export default function WatchlistPage() {
<div className="flex justify-center">
<span className={clsx(
"text-[10px] font-mono font-bold uppercase px-2.5 py-1.5 border",
statusConfig.color, statusConfig.bg
)}>
{statusConfig.label}
</span>
</div>
@@ -740,7 +759,9 @@ export default function WatchlistPage() {
{/* Expires */}
<div className="text-center text-sm font-mono">
{domainStatus === 'dropping_soon' && transitionCountdown ? (
<span className="text-amber-400 font-bold">{transitionCountdown}</span>
) : days !== null && days <= 30 && days > 0 ? (
<span className="text-orange-400 font-bold">{days}d</span>
) : (
<span className="text-white/50">{formatExpiryDate(domain.expiration_date)}</span>
@@ -792,7 +813,7 @@ export default function WatchlistPage() {
<RefreshCw className={clsx("w-4 h-4", refreshingId === domain.id && "animate-spin")} />
</button>
<button
onClick={() => openAnalyze(domain)}
title="Analyze"
className="w-10 h-10 flex items-center justify-center text-white/40 hover:text-accent border border-white/10 hover:bg-accent/10 hover:border-accent/20 transition-all"
>

View File

@@ -0,0 +1,276 @@
'use client'
import { useState, useEffect, useCallback } from 'react'
import { api } from '@/lib/api'
import {
RefreshCw,
Globe,
CheckCircle2,
AlertTriangle,
XCircle,
Loader2,
Play,
Clock,
Database,
TrendingUp,
} from 'lucide-react'
import clsx from 'clsx'
interface ZoneStatus {
tld: string
last_sync: string | null
domain_count: number
drops_today: number
total_drops: number
status: 'healthy' | 'stale' | 'never'
}
interface ZoneSyncStatus {
zones: ZoneStatus[]
summary: {
total_zones: number
healthy: number
stale: number
never_synced: number
total_drops_today: number
total_drops_all: number
}
}
export function ZonesTab() {
const [status, setStatus] = useState<ZoneSyncStatus | null>(null)
const [loading, setLoading] = useState(true)
const [syncingSwitch, setSyncingSwitch] = useState(false)
const [syncingCzds, setSyncingCzds] = useState(false)
const [message, setMessage] = useState<{ type: 'success' | 'error'; text: string } | null>(null)
const fetchStatus = useCallback(async () => {
try {
const data = await api.request<ZoneSyncStatus>('/admin/zone-sync/status')
setStatus(data)
} catch (e) {
console.error('Failed to fetch zone status:', e)
} finally {
setLoading(false)
}
}, [])
useEffect(() => {
fetchStatus()
// Auto-refresh every 30 seconds
const interval = setInterval(fetchStatus, 30000)
return () => clearInterval(interval)
}, [fetchStatus])
const triggerSwitchSync = async () => {
if (syncingSwitch) return
setSyncingSwitch(true)
setMessage(null)
try {
await api.request('/admin/zone-sync/switch', { method: 'POST' })
setMessage({ type: 'success', text: 'Switch.ch sync started! Check logs for progress.' })
// Refresh status after a delay
setTimeout(fetchStatus, 5000)
} catch (e) {
setMessage({ type: 'error', text: e instanceof Error ? e.message : 'Sync failed' })
} finally {
setSyncingSwitch(false)
}
}
const triggerCzdsSync = async () => {
if (syncingCzds) return
setSyncingCzds(true)
setMessage(null)
try {
await api.request('/admin/zone-sync/czds', { method: 'POST' })
setMessage({ type: 'success', text: 'ICANN CZDS sync started (parallel mode)! Check logs for progress.' })
// Refresh status after a delay
setTimeout(fetchStatus, 5000)
} catch (e) {
setMessage({ type: 'error', text: e instanceof Error ? e.message : 'Sync failed' })
} finally {
setSyncingCzds(false)
}
}
const formatDate = (dateStr: string | null) => {
if (!dateStr) return 'Never'
const date = new Date(dateStr)
const now = new Date()
const diff = now.getTime() - date.getTime()
const hours = Math.floor(diff / (1000 * 60 * 60))
if (hours < 1) return 'Just now'
if (hours < 24) return `${hours}h ago`
return date.toLocaleDateString('en-US', { month: 'short', day: 'numeric', hour: '2-digit', minute: '2-digit' })
}
const getStatusIcon = (s: string) => {
switch (s) {
case 'healthy': return <CheckCircle2 className="w-4 h-4 text-accent" />
case 'stale': return <AlertTriangle className="w-4 h-4 text-amber-400" />
default: return <XCircle className="w-4 h-4 text-rose-400" />
}
}
if (loading) {
return (
<div className="flex items-center justify-center py-20">
<Loader2 className="w-8 h-8 text-accent animate-spin" />
</div>
)
}
return (
<div className="space-y-6">
{/* Summary Cards */}
<div className="grid grid-cols-2 md:grid-cols-4 gap-4">
<div className="bg-white/[0.02] border border-white/[0.08] p-4">
<div className="flex items-center gap-2 text-white/40 text-xs font-mono uppercase mb-2">
<Globe className="w-4 h-4" />
Zones
</div>
<div className="text-2xl font-bold text-white">{status?.summary.total_zones || 0}</div>
<div className="text-xs text-white/30">
{status?.summary.healthy || 0} healthy
</div>
</div>
<div className="bg-white/[0.02] border border-white/[0.08] p-4">
<div className="flex items-center gap-2 text-white/40 text-xs font-mono uppercase mb-2">
<TrendingUp className="w-4 h-4" />
Today
</div>
<div className="text-2xl font-bold text-accent">{status?.summary.total_drops_today?.toLocaleString() || 0}</div>
<div className="text-xs text-white/30">drops detected</div>
</div>
<div className="bg-white/[0.02] border border-white/[0.08] p-4">
<div className="flex items-center gap-2 text-white/40 text-xs font-mono uppercase mb-2">
<Database className="w-4 h-4" />
Total
</div>
<div className="text-2xl font-bold text-white">{status?.summary.total_drops_all?.toLocaleString() || 0}</div>
<div className="text-xs text-white/30">drops in database</div>
</div>
<div className="bg-white/[0.02] border border-white/[0.08] p-4">
<div className="flex items-center gap-2 text-white/40 text-xs font-mono uppercase mb-2">
<Clock className="w-4 h-4" />
Status
</div>
<div className="flex items-center gap-2">
{status?.summary.stale || status?.summary.never_synced ? (
<>
<AlertTriangle className="w-5 h-5 text-amber-400" />
<span className="text-amber-400 font-bold">Needs Attention</span>
</>
) : (
<>
<CheckCircle2 className="w-5 h-5 text-accent" />
<span className="text-accent font-bold">All Healthy</span>
</>
)}
</div>
</div>
</div>
{/* Action Buttons */}
<div className="flex flex-wrap gap-4">
<button
onClick={triggerSwitchSync}
disabled={syncingSwitch}
className="flex items-center gap-2 px-4 py-3 bg-white/[0.05] border border-white/[0.08] text-white hover:bg-white/[0.08] transition-colors disabled:opacity-50"
>
{syncingSwitch ? <Loader2 className="w-4 h-4 animate-spin" /> : <Play className="w-4 h-4" />}
Sync Switch.ch (.ch, .li)
</button>
<button
onClick={triggerCzdsSync}
disabled={syncingCzds}
className="flex items-center gap-2 px-4 py-3 bg-accent/10 border border-accent/30 text-accent hover:bg-accent/20 transition-colors disabled:opacity-50"
>
{syncingCzds ? <Loader2 className="w-4 h-4 animate-spin" /> : <Play className="w-4 h-4" />}
Sync ICANN CZDS (gTLDs)
</button>
<button
onClick={fetchStatus}
className="flex items-center gap-2 px-4 py-3 border border-white/[0.08] text-white/60 hover:text-white hover:bg-white/[0.05] transition-colors"
>
<RefreshCw className="w-4 h-4" />
Refresh Status
</button>
</div>
{/* Message */}
{message && (
<div className={clsx(
"p-4 border",
message.type === 'success' ? "bg-accent/10 border-accent/30 text-accent" : "bg-rose-500/10 border-rose-500/30 text-rose-400"
)}>
{message.text}
</div>
)}
{/* Zone Table */}
<div className="border border-white/[0.08] overflow-hidden">
<div className="grid grid-cols-[80px_1fr_120px_120px_120px_100px] gap-4 px-4 py-3 bg-white/[0.02] text-xs font-mono text-white/40 uppercase tracking-wider border-b border-white/[0.08]">
<div>TLD</div>
<div>Last Sync</div>
<div className="text-right">Domains</div>
<div className="text-right">Today</div>
<div className="text-right">Total Drops</div>
<div className="text-center">Status</div>
</div>
<div className="divide-y divide-white/[0.04]">
{status?.zones.map((zone) => (
<div
key={zone.tld}
className="grid grid-cols-[80px_1fr_120px_120px_120px_100px] gap-4 px-4 py-3 items-center hover:bg-white/[0.02] transition-colors"
>
<div className="font-mono font-bold text-white">.{zone.tld}</div>
<div className="text-sm text-white/60">{formatDate(zone.last_sync)}</div>
<div className="text-right font-mono text-white/60">{zone.domain_count?.toLocaleString() || '-'}</div>
<div className="text-right font-mono text-accent font-bold">{zone.drops_today?.toLocaleString() || '0'}</div>
<div className="text-right font-mono text-white/40">{zone.total_drops?.toLocaleString() || '0'}</div>
<div className="flex items-center justify-center gap-2">
{getStatusIcon(zone.status)}
<span className={clsx(
"text-xs font-mono uppercase",
zone.status === 'healthy' ? "text-accent" : zone.status === 'stale' ? "text-amber-400" : "text-rose-400"
)}>
{zone.status}
</span>
</div>
</div>
))}
</div>
</div>
{/* Schedule Info */}
<div className="bg-white/[0.02] border border-white/[0.08] p-4">
<h3 className="text-sm font-bold text-white mb-3">Automatic Sync Schedule</h3>
<div className="grid grid-cols-1 md:grid-cols-2 gap-4 text-sm">
<div className="flex items-start gap-3">
<Clock className="w-4 h-4 text-white/40 mt-0.5" />
<div>
<div className="text-white font-medium">Switch.ch (.ch, .li)</div>
<div className="text-white/40">Daily at 05:00 UTC (06:00 CH)</div>
</div>
</div>
<div className="flex items-start gap-3">
<Clock className="w-4 h-4 text-white/40 mt-0.5" />
<div>
<div className="text-white font-medium">ICANN CZDS (gTLDs)</div>
<div className="text-white/40">Daily at 06:00 UTC (07:00 CH)</div>
</div>
</div>
</div>
</div>
</div>
)
}
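The per-zone `status` field rendered above is computed server-side and isn't visible in this diff. As a hypothetical mirror of that logic, assuming zones sync daily (per the schedule card) and allowing roughly one cycle plus slack before a zone counts as stale — the `classifyZone` name and the 26-hour threshold are illustrative, not the real backend code:

```typescript
// Assumed classification: 'never' without a sync, 'stale' past ~26h, else 'healthy'.
type ZoneHealth = 'healthy' | 'stale' | 'never'

function classifyZone(lastSync: string | null, now: Date = new Date()): ZoneHealth {
  if (!lastSync) return 'never'
  const ageHours = (now.getTime() - new Date(lastSync).getTime()) / 3_600_000
  // Daily syncs: anything older than one cycle plus slack is stale.
  return ageHours <= 26 ? 'healthy' : 'stale'
}
```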

View File

@@ -22,6 +22,7 @@ import {
} from 'lucide-react'
import { api } from '@/lib/api'
import { useAnalyzePanelStore } from '@/lib/analyze-store'
import { formatCountdown, parseIsoAsUtc } from '@/lib/time'
import type { AnalyzeResponse, AnalyzeSection, AnalyzeItem } from '@/components/analyze/types'
import { VisionSection } from '@/components/analyze/VisionSection'
@@ -178,7 +179,8 @@ export function AnalyzePanel() {
fastMode,
setFastMode,
sectionVisibility,
setSectionVisibility,
dropStatus,
} = useAnalyzePanelStore()
const [loading, setLoading] = useState(false)
@@ -277,6 +279,7 @@ export function AnalyzePanel() {
}, [data])
const headerDomain = data?.domain || domain || ''
const dropCountdown = useMemo(() => formatCountdown(dropStatus?.deletion_date ?? null), [dropStatus])
if (!isOpen) return null
@@ -374,6 +377,63 @@ export function AnalyzePanel() {
</div>
)}
{/* Drop Status Banner */}
{dropStatus && (
<div className="px-5 pb-3">
<div className={clsx(
"p-4 border flex items-center justify-between gap-4",
dropStatus.status === 'available' ? "border-accent/30 bg-accent/5" :
dropStatus.status === 'dropping_soon' ? "border-amber-400/30 bg-amber-400/5" :
dropStatus.status === 'taken' ? "border-rose-400/20 bg-rose-400/5" :
"border-white/10 bg-white/[0.02]"
)}>
<div className="flex items-center gap-3">
{dropStatus.status === 'available' ? (
<CheckCircle2 className="w-5 h-5 text-accent" />
) : dropStatus.status === 'dropping_soon' ? (
<Clock className="w-5 h-5 text-amber-400" />
) : dropStatus.status === 'taken' ? (
<XCircle className="w-5 h-5 text-rose-400" />
) : (
<Globe className="w-5 h-5 text-white/40" />
)}
<div>
<div className={clsx(
"text-sm font-bold uppercase tracking-wider",
dropStatus.status === 'available' ? "text-accent" :
dropStatus.status === 'dropping_soon' ? "text-amber-400" :
dropStatus.status === 'taken' ? "text-rose-400" :
"text-white/50"
)}>
{dropStatus.status === 'available' ? 'Available Now' :
dropStatus.status === 'dropping_soon' ? 'In Transition' :
dropStatus.status === 'taken' ? 'Re-registered' :
'Status Unknown'}
</div>
{dropStatus.status === 'dropping_soon' && dropStatus.deletion_date && (
<div className="text-xs font-mono text-amber-400/70">
{dropCountdown
? `Drops in ${dropCountdown} • ${parseIsoAsUtc(dropStatus.deletion_date).toLocaleDateString()}`
: `Drops: ${parseIsoAsUtc(dropStatus.deletion_date).toLocaleDateString()}`}
</div>
)}
</div>
</div>
{dropStatus.status === 'available' && domain && (
<a
href={`https://www.namecheap.com/domains/registration/results/?domain=${domain}`}
target="_blank"
rel="noopener noreferrer"
className="h-9 px-4 bg-accent text-black text-[10px] font-black uppercase tracking-widest flex items-center gap-1.5 hover:bg-white transition-all"
>
<Zap className="w-3 h-3" />
Buy Now
</a>
)}
</div>
</div>
)}
{/* Controls */}
<div className="px-5 pb-3 flex items-center gap-3">
<button

View File

@@ -3,6 +3,7 @@
import { useState, useEffect, useCallback, useMemo } from 'react'
import { api } from '@/lib/api'
import { useAnalyzePanelStore } from '@/lib/analyze-store'
import { formatCountdown } from '@/lib/time'
import {
Globe,
Loader2,
@@ -73,7 +74,20 @@ interface DropsTabProps {
}
export function DropsTab({ showToast }: DropsTabProps) {
const openAnalyzePanel = useAnalyzePanelStore((s) => s.open)
// Wrapper to open analyze panel with drop status
const openAnalyze = useCallback((domain: string, item?: DroppedDomain) => {
if (item) {
openAnalyzePanel(domain, {
status: item.availability_status || 'unknown',
deletion_date: item.deletion_date,
is_drop: true,
})
} else {
openAnalyzePanel(domain)
}
}, [openAnalyzePanel])
// Data State
const [items, setItems] = useState<DroppedDomain[]>([])
@@ -104,6 +118,23 @@ export function DropsTab({ showToast }: DropsTabProps) {
// Status Checking
const [checkingStatus, setCheckingStatus] = useState<number | null>(null)
const [trackingDrop, setTrackingDrop] = useState<number | null>(null)
const [trackedDomains, setTrackedDomains] = useState<Set<string>>(new Set())
// Prefetch Watchlist domains (so Track button shows correct state)
useEffect(() => {
let cancelled = false
const loadTracked = async () => {
try {
const res = await api.getDomains(1, 200)
if (cancelled) return
setTrackedDomains(new Set(res.domains.map(d => d.name.toLowerCase())))
} catch {
// If unauthenticated, Drops list still renders; "Track" will prompt on action.
}
}
loadTracked()
return () => { cancelled = true }
}, [])
// Load Stats
const loadStats = useCallback(async () => {
@@ -192,42 +223,38 @@ export function DropsTab({ showToast }: DropsTabProps) {
}
}, [checkingStatus, showToast])
// Format countdown from deletion date
const formatCountdown = useCallback((deletionDate: string | null): string | null => {
if (!deletionDate) return null
const del = new Date(deletionDate)
const now = new Date()
const diff = del.getTime() - now.getTime()
if (diff <= 0) return 'Now'
const days = Math.floor(diff / (1000 * 60 * 60 * 24))
const hours = Math.floor((diff % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60))
const mins = Math.floor((diff % (1000 * 60 * 60)) / (1000 * 60))
if (days > 0) return `${days}d ${hours}h`
if (hours > 0) return `${hours}h ${mins}m`
return `${mins}m`
}, [])
// Track a drop (add to watchlist)
const trackDrop = useCallback(async (dropId: number, domain: string) => {
if (trackingDrop) return
if (trackedDomains.has(domain.toLowerCase())) {
showToast(`${domain} is already in your Watchlist`, 'info')
return
}
setTrackingDrop(dropId)
try {
const result = await api.trackDrop(dropId)
// Mark as tracked regardless of status
setTrackedDomains(prev => {
const next = new Set(prev)
next.add(domain.toLowerCase())
return next
})
if (result.status === 'already_tracking') {
showToast(`${domain} is already in your Watchlist`, 'info')
} else {
showToast(result.message || `Added ${domain} to Watchlist!`, 'success')
}
} catch (e) {
showToast(e instanceof Error ? e.message : 'Failed to track', 'error')
} finally {
setTrackingDrop(null)
}
}, [trackingDrop, trackedDomains, showToast])
// Check if a drop is already tracked (domain-based, persists across sessions)
const isTracked = useCallback((fullDomain: string) => trackedDomains.has(fullDomain.toLowerCase()), [trackedDomains])
// Filtered and Sorted Items
const sortedItems = useMemo(() => {
@@ -557,14 +584,44 @@ export function DropsTab({ showToast }: DropsTabProps) {
const fullDomain = `${item.domain}.${item.tld}`
const isChecking = checkingStatus === item.id
const isTrackingThis = trackingDrop === item.id
const alreadyTracked = isTracked(fullDomain)
const status = item.availability_status || 'unknown'
// Status display config with better labels
const countdown = item.deletion_date ? formatCountdown(item.deletion_date) : null
const statusConfig = {
available: {
label: 'Available Now',
color: 'text-accent',
bg: 'bg-accent/10',
border: 'border-accent/30',
icon: CheckCircle2,
showBuy: true,
},
dropping_soon: {
label: countdown ? `In Transition • ${countdown}` : 'In Transition',
color: 'text-amber-400',
bg: 'bg-amber-400/10',
border: 'border-amber-400/30',
icon: Clock,
showBuy: false,
},
taken: {
label: 'Re-registered',
color: 'text-rose-400/60',
bg: 'bg-rose-400/5',
border: 'border-rose-400/20',
icon: Ban,
showBuy: false,
},
unknown: {
label: 'Check Status',
color: 'text-white/50',
bg: 'bg-white/5',
border: 'border-white/20',
icon: Search,
showBuy: false,
},
}[status]
const StatusIcon = statusConfig.icon
@@ -576,7 +633,7 @@ export function DropsTab({ showToast }: DropsTabProps) {
<div className="flex items-start justify-between gap-4 mb-4">
<div className="min-w-0">
<button
onClick={() => openAnalyze(fullDomain, item)}
className="text-lg font-bold text-white font-mono truncate block text-left hover:text-accent transition-colors"
>
{item.domain}<span className="text-white/30">.{item.tld}</span>
@@ -594,28 +651,32 @@ export function DropsTab({ showToast }: DropsTabProps) {
onClick={() => checkStatus(item.id, fullDomain)}
disabled={isChecking}
className={clsx(
"text-[10px] font-mono font-bold px-2.5 py-1 border flex items-center gap-1.5",
statusConfig.color, statusConfig.bg, statusConfig.border
)}
>
{isChecking ? <Loader2 className="w-3 h-3 animate-spin" /> : <StatusIcon className="w-3 h-3" />}
{statusConfig.label}
</button>
</div>
</div>
</div>
<div className="flex gap-2">
{/* Track Button - shows "Tracked" if already in watchlist */}
<button
onClick={() => trackDrop(item.id, fullDomain)}
disabled={isTrackingThis || alreadyTracked}
className={clsx(
"h-12 px-4 border text-xs font-bold uppercase tracking-widest flex items-center justify-center gap-2 transition-all",
alreadyTracked
? "border-accent/30 text-accent bg-accent/5 cursor-default"
: "border-white/10 text-white/60 hover:bg-white/5 active:scale-[0.98]"
)}
>
{isTrackingThis ? <Loader2 className="w-4 h-4 animate-spin" /> :
alreadyTracked ? <CheckCircle2 className="w-4 h-4" /> : <Eye className="w-4 h-4" />}
{alreadyTracked ? 'Tracked' : 'Track'}
</button>
{/* Action Button based on status */}
@@ -630,10 +691,15 @@ export function DropsTab({ showToast }: DropsTabProps) {
Buy Now
</a>
) : status === 'dropping_soon' ? (
<div className="flex-1 h-12 border border-amber-400/30 text-amber-400 bg-amber-400/5 text-xs font-bold uppercase tracking-widest flex flex-col items-center justify-center">
<span className="flex items-center gap-1.5">
<Clock className="w-3 h-3" />
In Transition
</span>
{countdown && (
<span className="text-[9px] text-amber-400/70 font-mono">{countdown} until drop</span>
)}
</div>
) : status === 'taken' ? (
<span className="flex-1 h-12 border border-rose-400/20 text-rose-400/60 text-xs font-bold uppercase tracking-widest flex items-center justify-center gap-2 bg-rose-400/5">
<Ban className="w-4 h-4" />
@@ -650,7 +716,7 @@ export function DropsTab({ showToast }: DropsTabProps) {
   </button>
 )}
 <button
-  onClick={() => openAnalyze(fullDomain)}
+  onClick={() => openAnalyze(fullDomain, item)}
   className="w-14 h-12 border border-white/10 text-white/50 flex items-center justify-center hover:text-accent hover:border-accent/30 hover:bg-accent/5 transition-all"
 >
   <Shield className="w-5 h-5" />
@@ -663,7 +729,7 @@ export function DropsTab({ showToast }: DropsTabProps) {
 {/* Domain */}
 <div className="min-w-0">
   <button
-    onClick={() => openAnalyze(fullDomain)}
+    onClick={() => openAnalyze(fullDomain, item)}
     className="text-sm font-bold text-white font-mono truncate group-hover:text-accent transition-colors text-left block"
   >
     {item.domain}<span className="text-white/30 group-hover:text-accent/40">.{item.tld}</span>
@@ -688,15 +754,13 @@ export function DropsTab({ showToast }: DropsTabProps) {
   onClick={() => checkStatus(item.id, fullDomain)}
   disabled={isChecking}
   className={clsx(
-    "text-[10px] font-mono font-bold px-2.5 py-1 border inline-flex items-center gap-1.5 transition-all hover:opacity-80",
+    "text-[10px] font-mono font-bold px-2.5 py-1.5 border inline-flex items-center gap-1.5 transition-all hover:opacity-80",
     statusConfig.color, statusConfig.bg, statusConfig.border
   )}
   title="Click to check real-time status"
 >
   {isChecking ? <Loader2 className="w-3 h-3 animate-spin" /> : <StatusIcon className="w-3 h-3" />}
-  {status === 'dropping_soon' && item.deletion_date
-    ? formatCountdown(item.deletion_date)
-    : statusConfig.label}
+  <span className="max-w-[100px] truncate">{statusConfig.label}</span>
 </button>
 </div>
@@ -709,17 +773,23 @@ export function DropsTab({ showToast }: DropsTabProps) {
 {/* Actions */}
 <div className="flex items-center justify-end gap-2 opacity-60 group-hover:opacity-100 transition-all">
-  {/* Track Button - always visible */}
+  {/* Track Button - shows checkmark if tracked */}
   <button
     onClick={() => trackDrop(item.id, fullDomain)}
-    disabled={isTrackingThis}
-    className="w-9 h-9 flex items-center justify-center border border-white/10 text-white/50 hover:text-white hover:bg-white/5 transition-all"
-    title="Add to Watchlist"
+    disabled={isTrackingThis || alreadyTracked}
+    className={clsx(
+      "w-9 h-9 flex items-center justify-center border transition-all",
+      alreadyTracked
+        ? "border-accent/30 text-accent bg-accent/5 cursor-default"
+        : "border-white/10 text-white/50 hover:text-white hover:bg-white/5"
+    )}
+    title={alreadyTracked ? "Already in Watchlist" : "Add to Watchlist"}
   >
-    {isTrackingThis ? <Loader2 className="w-3.5 h-3.5 animate-spin" /> : <Eye className="w-3.5 h-3.5" />}
+    {isTrackingThis ? <Loader2 className="w-3.5 h-3.5 animate-spin" /> :
+      alreadyTracked ? <CheckCircle2 className="w-3.5 h-3.5" /> : <Eye className="w-3.5 h-3.5" />}
   </button>
   <button
-    onClick={() => openAnalyze(fullDomain)}
+    onClick={() => openAnalyze(fullDomain, item)}
     className="w-9 h-9 flex items-center justify-center border border-white/10 text-white/50 hover:text-accent hover:border-accent/30 hover:bg-accent/5 transition-all"
     title="Analyze Domain"
   >
@@ -736,13 +806,25 @@ export function DropsTab({ showToast }: DropsTabProps) {
   title="Register this domain now!"
 >
   <Zap className="w-3 h-3" />
-  Buy Now
+  Buy
 </a>
 ) : status === 'dropping_soon' ? (
-  <span className="h-9 px-3 text-amber-400 text-[10px] font-bold uppercase tracking-widest flex items-center gap-1.5 border border-amber-400/30 bg-amber-400/5">
-    <Clock className="w-3 h-3" />
-    Soon
-  </span>
+  alreadyTracked ? (
+    <span className="h-9 px-3 text-accent text-[10px] font-bold uppercase tracking-widest flex items-center gap-1.5 border border-accent/30 bg-accent/5">
+      <CheckCircle2 className="w-3 h-3" />
+      Tracked
+    </span>
+  ) : (
+    <button
+      onClick={() => trackDrop(item.id, fullDomain)}
+      disabled={isTrackingThis}
+      className="h-9 px-3 text-amber-400 text-[10px] font-bold uppercase tracking-widest flex items-center gap-1.5 border border-amber-400/30 bg-amber-400/5 hover:bg-amber-400/10 transition-all"
+      title={countdown ? `Drops in ${countdown} - Track to get notified!` : 'Track to get notified when available'}
+    >
+      {isTrackingThis ? <Loader2 className="w-3 h-3 animate-spin" /> : <Eye className="w-3 h-3" />}
+      Track
+    </button>
+  )
 ) : status === 'taken' ? (
   <span className="h-9 px-3 text-rose-400/50 text-[10px] font-bold uppercase tracking-widest flex items-center gap-1.5 border border-rose-400/20 bg-rose-400/5">
     <Ban className="w-3 h-3" />
@@ -2,17 +2,25 @@ import { create } from 'zustand'
 export type AnalyzeSectionVisibility = Record<string, boolean>
+export type DropStatusInfo = {
+  status: 'available' | 'dropping_soon' | 'taken' | 'unknown'
+  deletion_date?: string | null
+  is_drop?: boolean
+}
 export type AnalyzePanelState = {
   isOpen: boolean
   domain: string | null
   fastMode: boolean
   filterText: string
   sectionVisibility: AnalyzeSectionVisibility
-  open: (domain: string) => void
+  dropStatus: DropStatusInfo | null
+  open: (domain: string, dropStatus?: DropStatusInfo) => void
   close: () => void
   setFastMode: (fast: boolean) => void
   setFilterText: (value: string) => void
   setSectionVisibility: (next: AnalyzeSectionVisibility) => void
+  setDropStatus: (status: DropStatusInfo | null) => void
 }
 const DEFAULT_VISIBILITY: AnalyzeSectionVisibility = {
@@ -28,11 +36,13 @@ export const useAnalyzePanelStore = create<AnalyzePanelState>((set) => ({
   fastMode: false,
   filterText: '',
   sectionVisibility: DEFAULT_VISIBILITY,
-  open: (domain) => set({ isOpen: true, domain, filterText: '' }),
-  close: () => set({ isOpen: false }),
+  dropStatus: null,
+  open: (domain, dropStatus) => set({ isOpen: true, domain, filterText: '', dropStatus: dropStatus || null }),
+  close: () => set({ isOpen: false, dropStatus: null }),
   setFastMode: (fastMode) => set({ fastMode }),
   setFilterText: (filterText) => set({ filterText }),
   setSectionVisibility: (sectionVisibility) => set({ sectionVisibility }),
+  setDropStatus: (dropStatus) => set({ dropStatus }),
 }))
 export const ANALYZE_PREFS_KEY = 'pounce_analyze_prefs_v1'
@@ -486,9 +486,12 @@ class ApiClient {
   is_available: boolean
   registrar: string | null
   expiration_date: string | null
+  deletion_date?: string | null
   notify_on_available: boolean
   created_at: string
   last_checked: string | null
+  status_checked_at?: string | null
+  status_source?: string | null
 }>
 total: number
 page: number
@@ -19,9 +19,12 @@ interface Domain {
   is_available: boolean
   registrar: string | null
   expiration_date: string | null
+  deletion_date?: string | null
   notify_on_available: boolean
   created_at: string
   last_checked: string | null
+  status_checked_at?: string | null
+  status_source?: string | null
 }
 interface Subscription {
frontend/src/lib/time.ts Normal file
@@ -0,0 +1,35 @@
export function parseIsoAsUtc(value: string): Date {
// If the string already contains timezone info, keep it.
// Otherwise treat it as UTC (backend persists naive UTC timestamps).
const hasTimezone = /([zZ]|[+-]\d{2}:\d{2})$/.test(value)
return new Date(hasTimezone ? value : `${value}Z`)
}
export function formatCountdown(iso: string | null): string | null {
if (!iso) return null
const target = parseIsoAsUtc(iso)
const now = new Date()
const diff = target.getTime() - now.getTime()
if (Number.isNaN(diff)) return null
if (diff <= 0) return 'Now'
const days = Math.floor(diff / (1000 * 60 * 60 * 24))
const hours = Math.floor((diff % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60))
const mins = Math.floor((diff % (1000 * 60 * 60)) / (1000 * 60))
if (days > 0) return `${days}d ${hours}h`
if (hours > 0) return `${hours}h ${mins}m`
return `${mins}m`
}
export function daysUntil(iso: string | null): number | null {
if (!iso) return null
const target = parseIsoAsUtc(iso)
const now = new Date()
const diff = target.getTime() - now.getTime()
if (Number.isNaN(diff)) return null
return Math.ceil(diff / (1000 * 60 * 60 * 24))
}
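The naive-UTC rule is the part of this helper worth verifying: a timestamp without a zone suffix is read as UTC (matching the backend's naive UTC persistence), while a suffixed one is kept as-is. A quick standalone check, duplicating `parseIsoAsUtc` for self-containment:

```typescript
// Same parsing rule as frontend/src/lib/time.ts.
function parseIsoAsUtc(value: string): Date {
  const hasTimezone = /([zZ]|[+-]\d{2}:\d{2})$/.test(value)
  return new Date(hasTimezone ? value : `${value}Z`)
}

const naive = parseIsoAsUtc('2025-12-21T17:00:00')       // no suffix: treated as UTC
const aware = parseIsoAsUtc('2025-12-21T18:00:00+01:00') // suffix kept as-is (= 17:00 UTC)
```

Without the `Z` fallback, `new Date('2025-12-21T17:00:00')` would be interpreted in the browser's local zone, skewing every countdown by the viewer's UTC offset.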
ops/CI_CD.md Normal file
@@ -0,0 +1,51 @@
# CI/CD (Gitea Actions) Auto Deploy
## Goal
Every push to `main` should:
- sync the repo to the production server
- build Docker images on the server
- restart containers
- run health checks
This repository uses a **remote SSH deployment** from Gitea Actions.
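
The shape of such a workflow can be sketched as follows (the file path, step names, and commands are illustrative assumptions, not the repo's actual workflow):

```yaml
# .gitea/workflows/deploy.yml (illustrative sketch)
name: Deploy Pounce (Auto)
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install SSH key
        run: |
          mkdir -p ~/.ssh
          printf '%s\n' "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/deploy_key
          chmod 600 ~/.ssh/deploy_key
      - name: Sync repo to server
        run: |
          rsync -az --delete \
            -e "ssh -i ~/.ssh/deploy_key -o StrictHostKeyChecking=no" \
            ./ "${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:${{ secrets.DEPLOY_PATH }}/"
      - name: Build and restart containers on server
        run: |
          ssh -i ~/.ssh/deploy_key -o StrictHostKeyChecking=no \
            "${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}" \
            "printf '%s\n' '${{ secrets.DEPLOY_SUDO_PASSWORD }}' | sudo -S docker build -t pounce-backend:latest '${{ secrets.DEPLOY_PATH }}/backend/'"
      - name: Health check
        run: curl -sf https://api.pounce.ch/api/v1/health
```

The real workflow also builds the frontend image (with its build args) and restarts both containers; the sketch shows only the secret wiring and the sync/build/health-check sequence.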
## Required Gitea Actions Secrets
Configure these in Gitea: **Repo → Settings → Actions → Secrets**
### Deployment (SSH)
- `DEPLOY_HOST`: production server IP/hostname
- `DEPLOY_USER`: SSH user (e.g. `administrator`)
- `DEPLOY_PATH`: absolute path where the repo is synced on the server (e.g. `/home/administrator/pounce`)
- `DEPLOY_SSH_KEY`: private key for SSH access
- `DEPLOY_SUDO_PASSWORD`: sudo password for `DEPLOY_USER` (used non-interactively)
### App Secrets (Backend)
Used to generate `/data/pounce/env/backend.env` on the server.
- `DATABASE_URL`
- `SECRET_KEY`
- `SMTP_PASSWORD`
- `STRIPE_SECRET_KEY`
- `STRIPE_WEBHOOK_SECRET`
- `GOOGLE_CLIENT_SECRET`
- `GH_OAUTH_SECRET`
- `CZDS_USERNAME`
- `CZDS_PASSWORD`
## Server Requirements
- `sudo` installed
- `docker` installed
- `DEPLOY_USER` must be able to run docker via `sudo` (pipeline uses `sudo -S docker ...`)
## Notes
- Secrets are written to `/data/pounce/env/backend.env` on the server with restricted permissions.
- Frontend build args are supplied in the workflow (`NEXT_PUBLIC_API_URL`, `BACKEND_URL`).
## Trigger
This file change triggers CI.
- runner dns fix validation
- redeploy after runner fix
- runner re-register
scripts/deploy.sh Executable file
@@ -0,0 +1,168 @@
#!/bin/bash
#
# POUNCE DEPLOYMENT SCRIPT
# ========================
# Run this locally to deploy to production
#
# Usage:
# ./scripts/deploy.sh # Deploy both frontend and backend
# ./scripts/deploy.sh backend # Deploy backend only
# ./scripts/deploy.sh frontend # Deploy frontend only
#
set -e
# Configuration
SERVER="185.142.213.170"
SSH_KEY="${SSH_KEY:-$HOME/.ssh/pounce_server}"
SSH_USER="administrator"
REMOTE_TMP="/tmp/pounce"
REMOTE_REPO="/home/administrator/pounce"
REMOTE_ENV_DIR="/data/pounce/env"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() { echo -e "${GREEN}[DEPLOY]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; exit 1; }
# Check SSH key
if [ ! -f "$SSH_KEY" ]; then
error "SSH key not found: $SSH_KEY"
fi
if [ -z "${DEPLOY_SUDO_PASSWORD:-}" ]; then
error "DEPLOY_SUDO_PASSWORD is required (export it locally, do not commit it)."
fi
# What to deploy
DEPLOY_BACKEND=true
DEPLOY_FRONTEND=true
if [ "$1" = "backend" ]; then
DEPLOY_FRONTEND=false
log "Deploying backend only"
elif [ "$1" = "frontend" ]; then
DEPLOY_BACKEND=false
log "Deploying frontend only"
else
log "Deploying both frontend and backend"
fi
# Sync and build backend
if [ "$DEPLOY_BACKEND" = true ]; then
log "Syncing backend code..."
rsync -avz --delete \
-e "ssh -i $SSH_KEY -o StrictHostKeyChecking=no" \
--exclude '__pycache__' \
--exclude '*.pyc' \
--exclude '.git' \
--exclude 'venv' \
backend/ \
${SSH_USER}@${SERVER}:${REMOTE_REPO}/backend/
log "Building backend image..."
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no ${SSH_USER}@${SERVER} \
"printf '%s\n' \"${DEPLOY_SUDO_PASSWORD}\" | sudo -S docker build -t pounce-backend:latest ${REMOTE_REPO}/backend/" || error "Backend build failed"
log "Deploying backend container..."
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no ${SSH_USER}@${SERVER} << BACKEND_DEPLOY
printf '%s\n' "${DEPLOY_SUDO_PASSWORD}" | sudo -S bash -c '
set -e
mkdir -p "${REMOTE_ENV_DIR}" /data/pounce/zones
chmod -R 755 /data/pounce || true
# Backend env must exist on server (created by CI or manually)
if [ ! -f "${REMOTE_ENV_DIR}/backend.env" ]; then
echo "Missing ${REMOTE_ENV_DIR}/backend.env"
exit 1
fi
docker stop pounce-backend 2>/dev/null || true
docker rm pounce-backend 2>/dev/null || true
docker run -d \
--name pounce-backend \
--network coolify \
--shm-size=8g \
--env-file "${REMOTE_ENV_DIR}/backend.env" \
-v /data/pounce/zones:/data \
--label "traefik.enable=true" \
--label "traefik.http.routers.pounce-backend.rule=Host(\`api.pounce.ch\`)" \
--label "traefik.http.routers.pounce-backend.entrypoints=https" \
--label "traefik.http.routers.pounce-backend.tls=true" \
--label "traefik.http.routers.pounce-backend.tls.certresolver=letsencrypt" \
--label "traefik.http.services.pounce-backend.loadbalancer.server.port=8000" \
--health-cmd "curl -f http://localhost:8000/health || exit 1" \
--health-interval 30s \
--restart unless-stopped \
pounce-backend:latest
docker network connect n0488s44osgoow4wgo04ogg0 pounce-backend 2>/dev/null || true
echo "✅ Backend deployed"
'
BACKEND_DEPLOY
fi
# Sync and build frontend
if [ "$DEPLOY_FRONTEND" = true ]; then
log "Syncing frontend code..."
rsync -avz --delete \
-e "ssh -i $SSH_KEY -o StrictHostKeyChecking=no" \
--exclude 'node_modules' \
--exclude '.next' \
--exclude '.git' \
frontend/ \
${SSH_USER}@${SERVER}:${REMOTE_REPO}/frontend/
log "Building frontend image..."
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no ${SSH_USER}@${SERVER} \
"printf '%s\n' \"${DEPLOY_SUDO_PASSWORD}\" | sudo -S docker build --build-arg NEXT_PUBLIC_API_URL=https://api.pounce.ch --build-arg BACKEND_URL=http://pounce-backend:8000 -t pounce-frontend:latest ${REMOTE_REPO}/frontend/" || error "Frontend build failed"
log "Deploying frontend container..."
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no ${SSH_USER}@${SERVER} << FRONTEND_DEPLOY
printf '%s\n' "${DEPLOY_SUDO_PASSWORD}" | sudo -S bash -c '
set -e
docker stop pounce-frontend 2>/dev/null || true
docker rm pounce-frontend 2>/dev/null || true
docker run -d \
--name pounce-frontend \
--network coolify \
--restart unless-stopped \
--label "traefik.enable=true" \
--label "traefik.http.routers.pounce-web.rule=Host(\`pounce.ch\`) || Host(\`www.pounce.ch\`)" \
--label "traefik.http.routers.pounce-web.entryPoints=https" \
--label "traefik.http.routers.pounce-web.tls=true" \
--label "traefik.http.routers.pounce-web.tls.certresolver=letsencrypt" \
--label "traefik.http.services.pounce-web.loadbalancer.server.port=3000" \
pounce-frontend:latest
docker network connect n0488s44osgoow4wgo04ogg0 pounce-frontend 2>/dev/null || true
echo "✅ Frontend deployed"
'
FRONTEND_DEPLOY
fi
# Health check
log "Running health check..."
sleep 15
curl -sf https://api.pounce.ch/api/v1/health && echo "" && log "Backend: ✅ Healthy"
curl -sf https://pounce.ch -o /dev/null && log "Frontend: ✅ Healthy"
# Cleanup
log "Cleaning up..."
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no ${SSH_USER}@${SERVER} \
"printf '%s\n' \"${DEPLOY_SUDO_PASSWORD}\" | sudo -S docker image prune -f" > /dev/null 2>&1
log "=========================================="
log "🎉 DEPLOYMENT SUCCESSFUL!"
log "=========================================="
log "Frontend: https://pounce.ch"
log "Backend: https://api.pounce.ch"
log "=========================================="