PROJECT DRACULA: MASTER LOGS

Host: Raspberry Pi 5 | Infrastructure: 14TB UGREEN Vault

WEEK 1: THE ISP LOCKOUT & THE "BOSS" HIERARCHY

The first week was dedicated to working around the ISP's locked hardware to establish a secure internal perimeter.

The Root Cause: The T-Mobile gateway is a "black box": its firmware blocks all access to advanced configuration. The network was blind and hostile to self-hosted servers.

Initial Failure (The AP Failure): We attempted to use the T-Mobile gateway as the main router with the TP-Link acting only as an Access Point (AP). It failed completely: T-Mobile's strict NAT rules caused IP conflicts, leaving Dracu (laptop) and Dracula (the Pi 5) invisible to each other.

The Engineering Solution: We abandoned negotiations with T-Mobile's firmware. We demoted it to a simple Layer 2 "Pipe". To regain full control, we pulled the TP-Link out of AP mode and promoted it to Access Router (AR / Router Boss).

Final Result:
TP-Link commands the 192.168.1.x subnet, managing the SPI firewall and static leases.

Current Topology:
- Gateway/Boss: 192.168.1.1
- Dracula (Pi 5): 192.168.1.10
- Samsung: 192.168.1.200
Mesh visibility: 100%
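
The topology above can be sanity-checked from any LAN host with a quick reachability sweep. A minimal sketch, using only the static leases listed above; hosts that drop ICMP (or an environment without ping) will simply report DOWN:

```shell
#!/bin/sh
# Sweep the Week 1 static leases and report reachability.
report=""
for host in 192.168.1.1 192.168.1.10 192.168.1.200; do
  if ping -c 1 -W 1 "$host" >/dev/null 2>&1; then
    status=up
  else
    status=DOWN
  fi
  # Accumulate one "IP status" line per host.
  report="${report}${host} ${status}
"
done
printf '%s' "$report"
```

With all three leases answering, the sweep prints one "up" line per device, confirming the 100% mesh visibility claimed above.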

WEEK 2: THE 443GB BUG & THE STORAGE HORIZON

With the network unified, the second week exposed a critical storage limitation — the 14TB vault was being reported as only 443GB.

The Root Cause: Linux's default SMB mount was incompatible with the UGREEN NAS inode scheme. Without the noserverino flag, the kernel trusted the server-supplied inode numbers and reported a phantom 443GB ceiling instead of the true 13.5TB capacity.

Failed Variant — Standard SMB: Default CIFS mount reported 443GB. All file operations were capped. The vault was effectively unusable for large backups.

Failed Variant — NFS: Attempted NFS as an alternative. Failed due to a UID/GID permission mismatch between the Pi's Linux users and the UGOS NAS internal user mapping.

The Engineering Solution: Upgraded to SMB v3.0 with noserverino, cache=none, and tuned rsize/wsize=131072 in /etc/fstab. The full 13.5TB horizon was unlocked instantly.
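
A minimal /etc/fstab sketch of that mount. The NAS IP, share name, and credentials-file path are placeholders the log does not specify; the option set is the one described above:

```
# /etc/fstab — UGREEN vault over SMB v3.0 (NAS_IP, share, and creds path are placeholders)
//NAS_IP/vault  /mnt/vault  cifs  credentials=/etc/cifs-creds,vers=3.0,noserverino,cache=none,rsize=131072,wsize=131072,nofail,_netdev  0  0
```

noserverino makes the client generate its own inode numbers instead of trusting the NAS, which is what removed the false 443GB ceiling; nofail and _netdev keep the Pi booting cleanly if the NAS is offline.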

Final Result:
All 3 NAS shares mounted and stable.

Mount Config:
- /mnt/vault (14TB main)
- /mnt/test_vault
- /mnt/ugreen (Photos)

Flags: noserverino, vers=3.0, nofail, _netdev
Capacity verified: 13.5TB

WEEK 3: DOCKER STABILITY & THE DATABASE MIGRATION

With the vault accessible, the third week focused on standing up the service stack and eliminating recurring crashes in the database layer.

The Root Cause: MariaDB was installed on the Pi's MicroSD card. The SD card's high I/O latency under concurrent Docker load caused repeated segmentation faults (exit code 139, i.e. SIGSEGV), crashing Nextcloud entirely.

Failed Variant — SD Card DB: MariaDB on mmcblk0 — Exit 139 crash loop. Nextcloud inaccessible within minutes of startup.

Failed Variant — 4TB Toshiba DB: Partial improvement in stability, but USB 3.0 throughput bottleneck limited performance. Not viable long-term.

The Engineering Solution: Migrated all Docker volumes (Nextcloud, MariaDB, Uptime Kuma) to the 14TB NAS Vault, eliminating the SD-card I/O bottleneck entirely. 100% uptime achieved.
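
A docker-compose sketch of that migration. Only the service list and MariaDB 10.11 come from the log; image tags, the bind paths under /mnt/vault, and the password variable are assumptions for illustration:

```yaml
# docker-compose.yml sketch: every stateful volume bound to the NAS mount,
# so no container persists data on the MicroSD card.
services:
  db:
    image: mariadb:10.11
    volumes:
      - /mnt/vault/docker/mariadb:/var/lib/mysql   # DB files on the vault (assumed path)
    environment:
      MYSQL_ROOT_PASSWORD: change-me               # placeholder secret
  nextcloud:
    image: nextcloud
    depends_on: [db]
    volumes:
      - /mnt/vault/docker/nextcloud:/var/www/html  # app data on the vault (assumed path)
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - /mnt/vault/docker/uptime-kuma:/app/data    # monitoring state (assumed path)
```

Moving the bind mounts is the whole fix: the containers themselves are unchanged, only their persistence layer relocates off mmcblk0.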

Final Result:
All services running on NAS-backed persistence.

Stack:
- Nextcloud + MariaDB 10.11
- Pi-hole (DNS)
- Uptime Kuma (monitoring)
- Cloudflare Tunnel

Files indexed: 119,274
Uptime: 100% verified

WEEK 4: NATIVE AI INTEGRATION & ZERO-TRUST INGRESS

Phase 4 required automating data classification and establishing secure remote telemetry to the Command Center without compromising the internal perimeter.

The Threat Model: Containerized AI deployment caused volume-binding conflicts. Critically, we rejected exposing the dashboard via traditional NAT port-forwarding on the TP-Link AR: it would expand the attack surface, opening the 192.168.1.x subnet to automated WAN scanners and unauthenticated ingress vectors.

Security Remediation: We bypassed container limits by installing Claude Code CLI natively for direct POSIX filesystem access. For secure external visibility, we deployed a Cloudflare Zero Trust Tunnel (cloudflared daemon), establishing a persistent, encrypted outbound connection requiring zero open ingress ports on the edge.
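
A cloudflared config sketch of that tunnel. The tunnel UUID, hostname, and backend port are placeholders; only the outbound-only design and the Nginx backend at 192.168.1.10 come from the log:

```yaml
# ~/.cloudflared/config.yml sketch — UUID, hostname, and port are placeholders.
tunnel: <tunnel-uuid>
credentials-file: /home/pi/.cloudflared/<tunnel-uuid>.json

ingress:
  # Dashboard traffic rides the persistent outbound connection;
  # no inbound port is ever opened on the edge router.
  - hostname: dashboard.example.com
    service: http://192.168.1.10:80
  # Required catch-all: anything else gets a 404.
  - service: http_status:404
```

Because the daemon dials out to Cloudflare's edge, the TP-Link's inbound rule set stays at zero open ports, matching the "Strict Block" posture below.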

Data Operations:
Regex-based data classification successful.
Unstructured data routed to /Documents/AI_Certifications.
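
The classification pass above can be sketched as a regex filter over filenames. The pattern, file names, and temp-dir paths below are hypothetical stand-ins; the real run routed matches into /Documents/AI_Certifications on the NAS:

```shell
#!/bin/sh
# Sketch of regex-based classification: route files whose names match
# a certification pattern into a dedicated folder.
SRC=$(mktemp -d)                 # stand-in for the unsorted inbox
DST="$SRC/AI_Certifications"     # stand-in for the routed target
mkdir -p "$DST"

# Demo data: two files that match the (assumed) pattern, one that doesn't.
touch "$SRC/aws_cert_2023.pdf" "$SRC/ml_certificate.pdf" "$SRC/holiday.jpg"

for f in "$SRC"/*; do
  [ -f "$f" ] || continue
  # Case-insensitive match on the filename only, not the full path.
  if basename "$f" | grep -qiE 'cert(ificat(e|ion))?'; then
    mv "$f" "$DST/"
  fi
done

ls "$DST"
```

The same loop generalizes to any routing rule: swap the regex and the target directory per category.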

Network Security Posture:
Zero Trust Overlay Network Active.
Inbound Edge Ports: 0 (Strict Block).
Nginx (1.10) securely bridged via reverse tunnel.
Status: ATTACK SURFACE MINIMIZED.

WEEK 5: LEGACY MIGRATION & CROSS-DEVICE SYNC

Phase 5 focused on decommissioning the 4TB Toshiba legacy drive and synchronizing all mobile and laptop data into the 14TB Vault.

The Conflict: Decades of unstructured data had to move off the Toshiba drive while real-time sync stayed live for the Samsung phone (1.200) and the Dracu laptop (1.25). The TP-Link Router Boss had to carry high-bandwidth internal traffic without I/O timeouts during the massive data ingestion.

The Engineering Fix: Leveraged the TP-Link's switching fabric for optimized local routing between devices. Initiated a multi-vector sync via Nextcloud, moving the 4TB Toshiba "swamp" into the organized NAS "Vault." All internal traffic is now prioritized for the 1.10 (Dracula) server node.

Sync Operations:
Toshiba 4TB → UGREEN 14TB (In Progress).
Mobile/Laptop Sync: Active via Nextcloud.

Network Status:
Local Switching: Optimized for 192.168.1.x traffic.
DHCP Reservations: Dracula (1.10), Dracu (1.25), Samsung (1.200).
Status: DATA CONSOLIDATION ACTIVE.