Add DomainWithCheckStatus model and GetWorstDomainStatuses usecase to
compute the most critical checker status per domain. The GET /domains
endpoint now returns status alongside each domain. The frontend domain
store, list components, and table row display dynamic status badges
with color and icon instead of a hardcoded "OK".
ZoneList is made generic (T extends HappydnsDomain) so the badges
snippet preserves the caller's concrete type without unsafe casts.
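The "worst status per domain" reduction can be sketched as an ordered reducer; the status names and their severity ordering here are assumptions, not happyDomain's actual model:

```go
package main

// CheckStatus is a hypothetical severity-ordered checker status; the
// constant names and ordering are illustrative assumptions.
type CheckStatus int

const (
	StatusUnknown CheckStatus = iota
	StatusOK
	StatusWarning
	StatusError
)

// worstStatus returns the most critical status among a domain's checks,
// so the domain list can show a single badge per domain.
func worstStatus(statuses []CheckStatus) CheckStatus {
	worst := StatusUnknown
	for _, s := range statuses {
		if s > worst {
			worst = s
		}
	}
	return worst
}
```

With an ordering like this, GetWorstDomainStatuses reduces to one pass over each domain's check results.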
Wire up the checker system to the HTTP layer:
- API controllers for checker operations, options, plans, and results
- Scoped routes at domain and service level
- Admin controllers for checker config and scheduler management
- App initialization: create usecases, start/stop scheduler
- Zone controller updated to include per-service check status
Implement the checker business logic:
- CheckerOptionsUsecase: scope-based option resolution, validation,
auto-fill from execution context (domain, zone, service)
- CheckPlanUsecase: CRUD for user scheduling configurations
- CheckStatusUsecase: aggregated status queries, execution history
- CheckerEngine: full execution pipeline (observe, evaluate, aggregate)
- Scheduler: background job executor with auto-discovery, min-heap
queue, worker pool, and jitter-based scheduling
Add the persistence layer for the checker system:
- Storage interfaces (CheckPlanStorage, CheckerOptionsStorage,
CheckEvaluationStorage, ExecutionStorage, ObservationSnapshotStorage,
SchedulerStateStorage) in the usecase/checker package
- KV-based implementations for LevelDB/Oracle NoSQL/InMemory backends
- Integrate checker storage into the main Storage interface
- Add tidy methods for checker entities (plans, configurations,
evaluations, executions, snapshots, observation cache) and
secondary index cleanup
Scan -plugins-directory paths at startup, open each .so via plugin.Open,
look up the NewCheckerPlugin symbol from checker-sdk-go, and register the
returned definition and observation provider in the global checker
registries. A pluginLoader indirection keeps the door open for future
plugin kinds.
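The open/lookup step described above can be sketched with the standard `plugin` package; the factory signature is an assumption about checker-sdk-go, and registration of the returned definition is left to the caller:

```go
package main

import (
	"fmt"
	"plugin"
)

// loadCheckerPlugin opens one .so discovered under -plugins-directory and
// resolves its NewCheckerPlugin symbol. The func() (interface{}, error)
// factory shape is an assumption, not checker-sdk-go's real signature;
// registering the returned definition and observation provider into the
// global checker registries would happen on the result.
func loadCheckerPlugin(path string) (func() (interface{}, error), error) {
	p, err := plugin.Open(path)
	if err != nil {
		return nil, fmt.Errorf("open plugin %s: %w", path, err)
	}
	sym, err := p.Lookup("NewCheckerPlugin")
	if err != nil {
		return nil, fmt.Errorf("lookup NewCheckerPlugin in %s: %w", path, err)
	}
	factory, ok := sym.(func() (interface{}, error))
	if !ok {
		return nil, fmt.Errorf("NewCheckerPlugin in %s has unexpected type", path)
	}
	return factory, nil
}
```

Note that `plugin.Lookup` returns the symbol's concrete type, so the type assertion must match the plugin's exported function type exactly.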
Switch ServiceMeta.Domain, ServiceMeta.Ttl, ServiceMeta.NbResources,
and ZoneMeta.DefaultTTL from binding:"required" to validate:"required".
This keeps them marked as required in the OpenAPI spec (swaggo reads
both tags) without gin rejecting valid zero values (0 for uint32,
"" for root domain).
CreateDomain now takes a DomainCreationInput and returns the created
Domain, so the controller no longer needs to construct an intermediate
Domain struct and the response includes the server-assigned fields.
Fixes: https://github.com/happyDomain/happydomain/issues/44
Introduce dedicated input types (DomainUpdateInput, SessionInput) to limit
writable fields on update endpoints, and add binding:"required" tags across
model structs to improve validation and Swagger documentation accuracy.
Propagate context.Context as first parameter through all provider and
domain usecase interface methods that didn't already have it. This is
a prerequisite for the upcoming secret management layer, which needs
request-scoped context to carry session-derived encryption keys.
After publishing zone corrections, compute and store a PropagatedAt
timestamp on each affected service indicating when old cached records
will have expired. For updated/deleted services, this is publish_time +
old service TTL. For new services, it uses the SOA minimum TTL
(negative cache duration), falling back to the zone's DefaultTTL.
The propagation detection reuses the same service matching technique as
ReassociateMetadata (subdomain + type + ServiceRDataHash). Both the
published snapshot and the WIP zone are stamped.
Decouple diff computation from executable provider closures by fetching
provider records and computing diffs locally via DNSControlDiffByRecord.
On apply, build a target record set from user-selected corrections using
BuildTargetRecords, then ask the provider for executable corrections
against that target. A published snapshot is inserted at ZoneHistory[1]
while the WIP zone at position 0 remains unchanged.
After AnalyzeZone rebuilds services from raw DNS records, metadata that
cannot be derived from DNS (Id, UserComment, OwnerId, Aliases, TTL, and
service-specific fields like OpenPGP/SMimeCert Username) was lost.
Add a post-processing function ReassociateMetadata that matches new
services to old ones by type and subdomain (using RDATA hashing for
disambiguation) and transfers metadata. Services opt in to body-level
transfer via the new MetadataEnricher interface.
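The matching step can be sketched as a map keyed by subdomain + type + RDATA hash; `rdataHash` below stands in for happyDomain's ServiceRDataHash and the key/field names are assumptions:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
)

// svcKey identifies a service for metadata reassociation: subdomain + type
// usually suffices, and the RDATA hash disambiguates multiple services of
// the same type at the same subdomain.
type svcKey struct {
	Subdomain string
	Type      string
	RDataHash string
}

// rdataHash fingerprints a service body (a stand-in for ServiceRDataHash);
// json.Marshal sorts map keys, so the hash is stable for map inputs.
func rdataHash(body interface{}) string {
	b, _ := json.Marshal(body)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

// reassociate copies metadata (here just a comment, for brevity) from old
// services onto new services that resolve to the same key after re-analysis.
func reassociate(oldComments map[svcKey]string, newKeys []svcKey) map[svcKey]string {
	out := make(map[svcKey]string, len(newKeys))
	for _, k := range newKeys {
		if c, ok := oldComments[k]; ok {
			out[k] = c
		}
	}
	return out
}
```

Services wanting more than surface metadata transferred would additionally opt in via the MetadataEnricher interface mentioned above.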
Restructure the service analyzer architecture to improve maintainability:
- Extract recordPool (zone records + mark-delete claiming) and
serviceAccumulator (service registry + domain normalization) as
embedded structs in Analyzer
- Replace swap-delete with mark-delete to eliminate mutation-during-iteration
- Centralize domain normalization using helpers.DomainRelative
- Make Comment/NbResources lazy via Service.MarshalJSON instead of
eager assignment at three separate call sites
- Extract SPF merging from usecase layer into services.CollectAndMergeSPF
- Add GetDefaultTTL accessor and comprehensive Analyzer doc comments
- Add round-trip test infrastructure covering MX, CNAME, CAA, TXT, SPF,
DMARC, GSuite, Origin, Server and more
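The mark-delete claiming that replaces swap-delete can be sketched like this; the record representation is simplified to strings for illustration:

```go
package main

// recordPool sketches mark-delete claiming: instead of swap-deleting
// records while iterating (which reorders the slice and can skip entries),
// analyzers mark claimed records and the pool filters them out afterwards.
type recordPool struct {
	records []string // stand-in for parsed zone records
	claimed []bool
}

func newRecordPool(records []string) *recordPool {
	return &recordPool{records: records, claimed: make([]bool, len(records))}
}

// claim marks record i as consumed without mutating the slice mid-iteration.
func (p *recordPool) claim(i int) { p.claimed[i] = true }

// remaining returns the unclaimed records once all analyzers have run.
func (p *recordPool) remaining() []string {
	var out []string
	for i, r := range p.records {
		if !p.claimed[i] {
			out = append(out, r)
		}
	}
	return out
}
```

Because indices stay stable for the whole pass, an analyzer can safely claim records while ranging over the pool.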
RFC 7208 requires exactly one SPF record per domain. Previously, the
standalone SPF service and provider services like GSuite each emitted
their own SPF TXT record, producing invalid DNS when both existed.
Introduce SPFContributor interface so services can declare SPF
directives independently. At zone generation time, all contributions
for the same domain are merged into a single SPF record with the
strictest "all" policy winning. During zone import, GSuite claims its
directive via ClaimSPFDirective so the SPF analyzer excludes it from
the standalone SPF service.
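The merge rule can be sketched as follows; this is an illustration of the strictest-"all" policy described above, not happyDomain's CollectAndMergeSPF, and the default qualifier when no contributor supplies an "all" is an assumption:

```go
package main

import "strings"

// spfAllRank orders "all" qualifiers from most permissive to strictest:
// +all < ?all < ~all < -all.
var spfAllRank = map[string]int{"+all": 0, "?all": 1, "~all": 2, "-all": 3}

// mergeSPF combines per-service SPF directive lists into one record,
// deduplicating directives and keeping the strictest "all" mechanism,
// so a domain never carries more than one SPF record (RFC 7208).
func mergeSPF(contributions [][]string) string {
	seen := map[string]bool{}
	var directives []string
	all := "+all" // assumed default when no contributor states a policy
	for _, c := range contributions {
		for _, d := range c {
			if rank, ok := spfAllRank[d]; ok {
				if rank > spfAllRank[all] {
					all = d
				}
				continue
			}
			if !seen[d] {
				seen[d] = true
				directives = append(directives, d)
			}
		}
	}
	return "v=spf1 " + strings.Join(append(directives, all), " ")
}
```

With this, a standalone SPF service and GSuite each contribute directives independently and the generated zone still contains a single valid record.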
Previously, RecordFailure/RecordSuccess were only called when a captcha
provider was configured, making brute-force tracking entirely inactive
on deployments without one.
- Always track login failures and successes regardless of captcha config
- When threshold is crossed with a captcha provider: 401 + captcha_required (existing behaviour)
- When threshold is crossed without a captcha provider: 429 + rate_limited flag
- Frontend: show a rate-limited message and disable the submit button on 429
- Add errors.rate-limited translation key to all locales
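The branch described in the bullets above can be sketched as one decision function; the result struct and names are illustrative:

```go
package main

// loginGateResult models the response chosen after checking the
// failed-attempt counter; field names are illustrative.
type loginGateResult struct {
	Status          int
	CaptchaRequired bool
	RateLimited     bool
}

// gateLogin: failures below the threshold get a plain 401. Once the
// threshold is crossed, deployments with a captcha provider keep the
// existing 401 + captcha_required behaviour, while deployments without one
// now answer 429 + rate_limited instead of tracking nothing at all.
func gateLogin(failures, threshold int, captchaConfigured bool) loginGateResult {
	if failures < threshold {
		return loginGateResult{Status: 401}
	}
	if captchaConfigured {
		return loginGateResult{Status: 401, CaptchaRequired: true}
	}
	return loginGateResult{Status: 429, RateLimited: true}
}
```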
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Add a bcryptCost constant to centralize the target cost (12), a
NeedsRehash() method that checks the stored hash cost via bcrypt.Cost(),
and trigger a transparent rehash in AuthenticateUserWithPassword when
the stored hash is below the current target.
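The check can be sketched as below. The real code reads the cost via bcrypt.Cost(); this dependency-free sketch parses the cost field of the modular hash format ("$2a$12$...") by hand to the same effect:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bcryptCost is the target cost; OWASP currently recommends >= 12.
const bcryptCost = 12

// hashCost extracts the cost field from a bcrypt hash string. The actual
// implementation uses bcrypt.Cost(); hand-parsing here avoids the
// golang.org/x/crypto dependency in this sketch.
func hashCost(hash string) (int, error) {
	parts := strings.Split(hash, "$") // ["", "2a", "12", "<salt+digest>"]
	if len(parts) < 4 {
		return 0, fmt.Errorf("malformed bcrypt hash")
	}
	return strconv.Atoi(parts[2])
}

// needsRehash reports whether a stored hash is below the target cost, so a
// transparent rehash can run right after a successful password check, while
// the plaintext is still available.
func needsRehash(hash string) bool {
	cost, err := hashCost(hash)
	return err != nil || cost < bcryptCost
}
```

Rehashing inside AuthenticateUserWithPassword is the natural place because it is the only moment the plaintext password is in hand.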
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
OWASP now recommends bcrypt cost >= 12. Using the implicit default cost
of 0 (which maps to 10) is below current recommendations.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
bcrypt silently truncates input at 72 bytes. Without an explicit maximum,
a user could set a 200-char password and log in with only the first 72
chars, and very long passwords could be used for a CPU-based DoS.
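An explicit length guard is a few lines; note the limit is in bytes, not runes, since bcrypt operates on the UTF-8 byte string:

```go
package main

import "errors"

// bcryptMaxPasswordLen matches bcrypt's 72-byte input limit: anything
// longer is silently truncated by the algorithm, so reject it up front.
const bcryptMaxPasswordLen = 72

var errPasswordTooLong = errors.New("password exceeds 72 bytes")

// checkPasswordLength enforces the explicit maximum. len() counts bytes,
// which is the right unit here: multi-byte UTF-8 characters consume more
// than one byte of bcrypt's 72-byte budget.
func checkPasswordLength(password string) error {
	if len(password) > bcryptMaxPasswordLen {
		return errPasswordTooLong
	}
	return nil
}
```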
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replace the UsecaseDependancies interface with plain Dependencies structs
defined locally in each route package. DeclareRoutes acts as the sole
composition root where use cases are resolved; sub-route functions and
controllers receive only the specific interfaces they need.
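The pattern can be sketched as follows; the interface and function names are illustrative, not the routes packages' real ones:

```go
package main

// DomainLister is an example of the narrow interfaces controllers now
// depend on, instead of the old catch-all UsecaseDependancies interface.
type DomainLister interface {
	ListDomains() []string
}

// Dependencies is the plain struct each route package declares locally,
// listing only the use cases its own controllers consume.
type Dependencies struct {
	Domains DomainLister
}

// declareRoutes stands in for the composition root: use cases are resolved
// once here and handed down to handlers as the interfaces they need.
func declareRoutes(deps Dependencies) func() []string {
	return deps.Domains.ListDomains
}

// fakeDomains is a stub illustrating how narrow interfaces ease testing.
type fakeDomains struct{}

func (fakeDomains) ListDomains() []string { return []string{"example.org"} }
```

A side benefit of the narrow interfaces is that controller tests only need tiny stubs like `fakeDomains` rather than a full use-case container.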
Integrate optional bot protection on the registration endpoint (always
required when a provider is configured) and the login endpoint (triggered
after N consecutive failures for the same IP or email address).
Supported providers: hCaptcha, reCAPTCHA v2, Cloudflare Turnstile.
Implement backend-driven service initialization to replace hazardous
frontend initialization logic. Services can now provide custom
initialization via ServiceInitializer interface or get sensible
defaults automatically.
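The opt-in shape can be sketched like this; the interface name matches the text, but its exact signature in happyDomain may differ:

```go
package main

// ServiceInitializer is the assumed opt-in interface: services implementing
// it control their own initial state, all others fall through to defaults.
type ServiceInitializer interface {
	InitializeService() error
}

// initializeService applies custom initialization when the service provides
// it, and otherwise does nothing, leaving the zero value as the sensible
// default.
func initializeService(svc interface{}) error {
	if init, ok := svc.(ServiceInitializer); ok {
		return init.InitializeService()
	}
	return nil
}

// exampleService is a hypothetical service opting in to custom defaults.
type exampleService struct{ Policy string }

func (s *exampleService) InitializeService() error {
	s.Policy = "strict" // hypothetical backend-chosen default
	return nil
}
```

Moving this decision server-side removes the need for the frontend to guess sensible initial values per service type.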
Extract common key-value storage operations into internal/storage/kvtpl/
template directory. This consolidates duplicated logic from both leveldb
and inmemory implementations into reusable generic functions.
Key changes:
- Create kvtpl/ package with generic KV storage operations
- Consolidate iterator implementation using generics
- Update migration functions to use new template structure
- Refactor tests to work with new storage interface
This prepares the codebase for easier addition of new storage backends
by providing a consistent template layer.
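The consolidated generic operations can be sketched as below; the `KV` interface is illustrative, not kvtpl's exact one:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// KV is the minimal key-value surface both the leveldb and inmemory
// backends already provide (illustrative shape).
type KV interface {
	Get(key string) ([]byte, bool)
	Put(key string, value []byte) error
}

// GetTyped decodes one stored entity, consolidating the unmarshal
// boilerplate previously duplicated per backend and per entity type.
func GetTyped[T any](kv KV, key string) (T, error) {
	var v T
	raw, ok := kv.Get(key)
	if !ok {
		return v, fmt.Errorf("key %q not found", key)
	}
	return v, json.Unmarshal(raw, &v)
}

// PutTyped is the matching generic encoder.
func PutTyped[T any](kv KV, key string, v T) error {
	raw, err := json.Marshal(v)
	if err != nil {
		return err
	}
	return kv.Put(key, raw)
}

// memKV is a toy in-memory backend showing how little a new backend must
// implement once the typed operations live in the template layer.
type memKV map[string][]byte

func (m memKV) Get(k string) ([]byte, bool)  { v, ok := m[k]; return v, ok }
func (m memKV) Put(k string, v []byte) error { m[k] = v; return nil }
```

A new storage backend then only needs to implement the raw `KV` surface; all typed access comes from the template layer for free.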