Propagate context.Context as first parameter through all provider and
domain usecase interface methods that didn't already have it. This is
a prerequisite for the upcoming secret management layer, which needs
request-scoped context to carry session-derived encryption keys.
Replace sha256.Sum224 (SHA-224 algorithm) with SHA-256 truncated to 28
bytes, matching what RFCs 7929 and 8162 actually specify and what the
frontend was already computing correctly. Also wrap GetRecords errors
with the service type name for easier diagnosis.
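For reference, the label these RFCs call for is the plain SHA-256 digest of the
mailbox local-part, truncated to its first 28 bytes; a minimal sketch (the
function name is illustrative):

    import (
        "crypto/sha256"
        "encoding/hex"
    )

    // ownerLabel derives the owner-name label per RFC 7929 / RFC 8162:
    // a full SHA-256 digest, keeping only the first 28 bytes.
    func ownerLabel(localPart string) string {
        sum := sha256.Sum256([]byte(localPart)) // 32-byte digest
        return hex.EncodeToString(sum[:28])     // truncate to 28 bytes
    }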
Fix panic on DS records by importing dnscontrol rtype package
For record types registered in rtypecontrol.Func, bypass dnsrr.RRtoRC and
call rtypecontrol.NewRecordConfigFromStruct directly, passing the record as
the struct argument. This ensures correct handling of modern rtypes (DS, RP,
etc.) without relying on dnsrr's internal dispatch. Legacy types fall back to
the existing dnsrr.RRtoRC path.
Import _ "github.com/StackExchange/dnscontrol/v4/pkg/rtype" to trigger
init() registration of DS and RP as modern types. Without this import,
dnsrr.RRtoRC would panic with "DS should be handled as modern type".
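A rough sketch of the resulting dispatch, assuming rtypecontrol.Func is keyed
by the textual record type and that both constructors take the parsed RR plus
the zone origin (the real dnscontrol signatures may differ):

    import (
        "github.com/miekg/dns"

        // Blank import runs the init() that registers DS and RP as modern
        // rtypes; without it dnsrr.RRtoRC panics as described above.
        _ "github.com/StackExchange/dnscontrol/v4/pkg/rtype"
    )

    // toRecordConfig routes registered modern rtypes around dnsrr's internal
    // dispatch; legacy types keep the existing path. Illustrative only.
    func toRecordConfig(rr dns.RR, origin string) (*models.RecordConfig, error) {
        if _, ok := rtypecontrol.Func[dns.TypeToString[rr.Header().Rrtype]]; ok {
            return rtypecontrol.NewRecordConfigFromStruct(rr, origin)
        }
        return dnsrr.RRtoRC(rr, origin)
    }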
The ProviderMessage does not necessarily contain all metadata. Moreover,
some metadata should not be updated (Owner, Id, ...). So keep the
modifications to a minimum.
Introduce a new adapter layer that allows happyDomain to use providers
from the libdns ecosystem alongside the existing dnscontrol providers.
The adapter implements ProviderActuator by converting between miekg/dns
and libdns record formats, reusing the existing DNSControl diff engine
for computing corrections, and generating executable correction functions
that call libdns Append/Delete/Set methods.
For providers that manage SOA serials (like AXFRDDNS), re-fetch the
zone after applying corrections to capture the real published state.
The Origin service of both the snapshot and the WIP zone is updated with
the actual serial.
This is gated on a new "manages-soa-serial" capability, preserving
the existing behavior for providers that abstract SOA handling.
Closes: https://github.com/happyDomain/happydomain/issues/35
After publishing zone corrections, compute and store a PropagatedAt
timestamp on each affected service indicating when old cached records
will have expired. For updated/deleted services, this is publish_time +
old service TTL. For new services, it uses the SOA minimum TTL
(negative cache duration), falling back to the zone's DefaultTTL.
The propagation detection reuses the same service matching technique as
ReassociateMetadata (subdomain + type + ServiceRDataHash). Both the
published snapshot and the WIP zone are stamped.
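A sketch of the rule (names are illustrative, TTLs in seconds):

    import "time"

    // propagatedAt estimates when old cached records will have expired.
    func propagatedAt(publishTime time.Time, oldTTL, soaMinTTL, defaultTTL uint32, isNew bool) time.Time {
        ttl := oldTTL // updated/deleted services: old records linger for their own TTL
        if isNew {
            ttl = soaMinTTL // new services: negative-cache duration from the SOA
            if ttl == 0 {
                ttl = defaultTTL // fall back to the zone's DefaultTTL
            }
        }
        return publishTime.Add(time.Duration(ttl) * time.Second)
    }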
Add Err() method to storage.Iterator interface and implement it on all
backends (LevelDB, PostgreSQL, in-memory). Update kvtpl.KVIterator.Err()
to fall through to the underlying iterator's error. Check Err() after
every iteration loop across kvtpl and usecase layers to surface I/O
failures and rate-limit errors instead of returning truncated data.
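The pattern now expected after every loop looks roughly like this (iterator
and helper names are illustrative):

    iter := db.Search(prefix)
    defer iter.Release()
    for iter.Next() {
        if err := decode(iter.Value()); err != nil {
            return err
        }
    }
    // Without this final check, an I/O or rate-limit failure mid-iteration
    // is indistinguishable from a clean end of data.
    if err := iter.Err(); err != nil {
        return fmt.Errorf("listing aborted: %w", err)
    }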
Use a prepared statement with bind variables for Search() to avoid SQL
injection via regex-special characters. Enable client-side rate limiting,
add retry-with-backoff on rate-limit errors in the iterator, fix batch
iteration logic, and expose Err()/Release() on Iterator.
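The pattern now travels as a bind variable instead of being spliced into the
statement, along these lines (table and column names are placeholders):

    rows, err := db.QueryContext(ctx,
        `SELECT key, value FROM records WHERE key ~ $1`, // pattern bound as $1
        pattern)
    if err != nil {
        return nil, err
    }
    defer rows.Close()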
Analogous to internal/provider, extract the service registry (Svc,
RegisterService, FindService, ListServices, OrderedServices, FindSubService,
RegisterSubServices) and the zone analyzer (ServiceAnalyzer, Analyzer,
AnalyzeZone) from services/ into a new internal/service package.
Add/update/delete service calls in the Service facade were bypassing
ActionOnEditableZone, so mutations could silently target a committed or
published zone instead of deriving a new editable snapshot first.
Wrap AddServiceToZone, RemoveServiceFromZone, and UpdateZoneService
with ActionOnEditableZone so the decorator is applied consistently.
Fixes regression introduced by b2b6467575.
Decouple diff computation from executable provider closures by fetching
provider records and computing diffs locally via DNSControlDiffByRecord.
On apply, build a target record set from user-selected corrections using
BuildTargetRecords, then ask the provider for executable corrections
against that target. A published snapshot is inserted at ZoneHistory[1]
while the WIP zone at position 0 remains unchanged.
Add a LogMailer that prints emails to stdout when no mail transport is
configured, eliminating the reflect-based nil interface checks that were
scattered across the authuser package. The App now always injects a
non-nil Mailer, so the usecase layer no longer needs to guard against it.
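A minimal sketch of such a fallback transport (the Mailer interface shown
here is an assumption for illustration, not necessarily happyDomain's exact
one):

    import "fmt"

    type Mailer interface {
        Send(to, subject, body string) error
    }

    // LogMailer satisfies Mailer by printing outgoing mail to stdout, so the
    // usecase layer can always rely on a non-nil transport.
    type LogMailer struct{}

    func (LogMailer) Send(to, subject, body string) error {
        fmt.Printf("mail to %s\nSubject: %s\n\n%s\n", to, subject, body)
        return nil
    }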
Expose service editors publicly (no auth required) at /generator for
SEO discoverability. Each page shows an interactive editor alongside
a live DNS zone record preview powered by a new POST
/service_specs/:ssid/records backend endpoint.
Extract List method into a dedicated ZoneCorrectionListerUsecase to
separate concerns, and fix several bugs in Apply:
- Fix early-break condition: track appliedCount instead of using the
correction index, which incorrectly compared against the position in
all corrections rather than applied ones.
- Stop mutating form.WantedCorrections in-place; use a matched slice
to track applied corrections without side effects.
- Fix misleading UserMessage strings that all said "unable to create
the zone" regardless of which step failed.
- Use a single clock call for CommitDate, Published, and LastModified
instead of two separate time.Now() calls producing different timestamps.
- Inject a clock function for testability.
- Improve error messages to include applied/total correction counts.
After AnalyzeZone rebuilds services from raw DNS records, metadata that
cannot be derived from DNS (Id, UserComment, OwnerId, Aliases, TTL, and
service-specific fields like OpenPGP/SMimeCert Username) was lost.
Add a post-processing function ReassociateMetadata that matches new
services to old ones by type and subdomain (using RDATA hashing for
disambiguation) and transfers metadata. Services opt in to body-level
transfer via the new MetadataEnricher interface.
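The opt-in hook looks roughly like this (the method signature is an
assumption; "Service" stands in for happyDomain's service type):

    // MetadataEnricher lets a rebuilt service copy body-level fields (e.g.
    // the OpenPGP/SMimeCert Username) from the old service that
    // ReassociateMetadata matched to it by subdomain + type, using the
    // RDATA hash when several candidates remain.
    type MetadataEnricher interface {
        EnrichFrom(old Service)
    }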
Restructure the service analyzer architecture to improve maintainability:
- Extract recordPool (zone records + mark-delete claiming) and
serviceAccumulator (service registry + domain normalization) as
embedded structs in Analyzer
- Replace swap-delete with mark-delete to eliminate mutation-during-iteration
- Centralize domain normalization using helpers.DomainRelative
- Make Comment/NbResources lazy via Service.MarshalJSON instead of
eager assignment at three separate call sites
- Extract SPF merging from usecase layer into services.CollectAndMergeSPF
- Add GetDefaultTTL accessor and comprehensive Analyzer doc comments
- Add round-trip test infrastructure covering MX, CNAME, CAA, TXT, SPF,
DMARC, GSuite, Origin, Server and more
RFC 7208 requires exactly one SPF record per domain. Previously, the
standalone SPF service and provider services like GSuite each emitted
their own SPF TXT record, producing invalid DNS when both existed.
Introduce SPFContributor interface so services can declare SPF
directives independently. At zone generation time, all contributions
for the same domain are merged into a single SPF record with the
strictest "all" policy winning. During zone import, GSuite claims its
directive via ClaimSPFDirective so the SPF analyzer excludes it from
the standalone SPF service.
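The contributor contract, sketched with illustrative names and directive
format:

    // SPFContributor lets a service declare the SPF directives it needs
    // without emitting its own TXT record.
    type SPFContributor interface {
        SPFDirectives() []string // e.g. "include:_spf.google.com"
    }

    // At generation time all contributions for a domain are merged into a
    // single record, with the strictest "all" policy winning, e.g.:
    //   v=spf1 include:_spf.google.com include:spf.example.net -all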
Previously the CSRF state, PKCE verifier, nonce, and next-path were
deleted and the session saved before the token exchange. A failure during
exchange or verification left the user with no way to retry without
restarting the whole flow.
Remove the intermediate session.Save(): the in-memory deletions are
discarded on any error so the session keys remain available for a retry.
On success, SessionLoginOK calls session.Clear() + Save() which atomically
consumes all keys. PKCE ensures the authorization code cannot be replayed
independently of the session.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The next query parameter was silently dropped when users chose OIDC
login, always redirecting to / after authentication. Forward the
validated next value to /auth/oidc, store it in the session during
redirect, and use it for the final redirect in the callback, matching
the behaviour of password-based login.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Generate a cryptographically random nonce at redirect time, store it in
the session, and include it in the authorization request. After token
verification, reject the callback if the ID token's nonce claim does not
match the session value, preventing replayed or stolen ID tokens from
being accepted.
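With github.com/coreos/go-oidc/v3 the check amounts to the following
(variable names and the random-token helper are illustrative):

    // Redirect: bind a fresh nonce to both the session and the request.
    nonce := newRandomToken() // stored in the session
    authURL := oauthConfig.AuthCodeURL(state, oidc.Nonce(nonce))

    // Callback, after the token exchange and ID token verification:
    idToken, err := verifier.Verify(ctx, rawIDToken)
    if err != nil {
        return err
    }
    if idToken.Nonce != sessionNonce {
        return errors.New("oidc: nonce mismatch, rejecting ID token")
    }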
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Generate a cryptographic code verifier at redirect time, store it in the
session, and send the S256 code_challenge in the authorization request.
Use the verifier during token exchange to bind the code to the session
that initiated the flow, protecting against authorization code
interception attacks.
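golang.org/x/oauth2 ships PKCE helpers covering both ends of the flow;
roughly:

    // Redirect: generate the verifier, keep it in the session, and send only
    // the derived S256 challenge to the provider.
    verifier := oauth2.GenerateVerifier()
    authURL := oauthConfig.AuthCodeURL(state, oauth2.S256ChallengeOption(verifier))

    // Callback: present the verifier so the code is bound to the session
    // that initiated the flow.
    token, err := oauthConfig.Exchange(ctx, code, oauth2.VerifierOption(verifier))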
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Log the underlying error server-side and return a generic message to
the client, preventing information leakage of library internals, error
details, and internal URLs through the OIDC callback error responses.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
SHA-1 has known collision vulnerabilities. Switch to SHA-256 when
deriving a deterministic user identifier from the email address in the
OIDC callback, eliminating the risk of crafted email collision attacks.
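The derivation itself is a one-line change, sketched here (variable names and
the downstream encoding helper are illustrative):

    import "crypto/sha256"

    // Was sha1.Sum([]byte(email)): 20 bytes, collision-prone.
    digest := sha256.Sum256([]byte(email)) // 32 bytes, collision-resistant
    userID := encodeUserID(digest[:])      // hypothetical downstream encoding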
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Reduce SESSION_MAX_DURATION from 365 days to 15 days
- Add SESSION_RENEWAL_THRESHOLD (7 days): sessions are only extended
when fewer than 7 days remain, instead of refreshing on every request
- Align cookie MaxAge with SESSION_MAX_DURATION (derived from the constant)
- Enforce expiry in load(): expired sessions are deleted on first use
and the caller receives an error, preventing Bearer-token replay of
stale sessions that the securecookie age check would not catch
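A sketch of the combined rules (constant and field names are illustrative):

    import "time"

    const (
        sessionMaxDuration      = 15 * 24 * time.Hour // SESSION_MAX_DURATION
        sessionRenewalThreshold = 7 * 24 * time.Hour  // SESSION_RENEWAL_THRESHOLD
    )

    // load(): never serve an expired session.
    if time.Now().After(session.ExpiresOn) {
        store.Delete(session.ID) // hypothetical deletion call
        return ErrSessionExpired
    }

    // Renewal: only push the expiry forward inside the renewal window.
    if time.Until(session.ExpiresOn) < sessionRenewalThreshold {
        session.ExpiresOn = time.Now().Add(sessionMaxDuration)
    }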
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The handler fetched the session before applying the update and returned
that pre-update snapshot. The client therefore never saw the new
Description or ExpiresOn values. Fetch the session after the update
so the response reflects the persisted state.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously IsNew was unconditionally set to false after s.load() even
when load returned an error. Callers that branch on IsNew could treat
a broken/missing session as a pre-existing authenticated one.
Only mark the session as not-new when the load actually succeeded.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Bearer tokens and Basic Auth usernames were used as session IDs without
any format validation, allowing arbitrary strings (including crafted or
very long values) to reach the storage layer as untrusted session IDs.
Restrict accepted session IDs to the exact format produced by
NewSessionID(): standard base32 alphabet [A-Z2-7], exactly 103 chars.
Any token that does not match is ignored, resulting in a new anonymous
session instead of a storage lookup with attacker-controlled input.
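The accepted shape, as a sketch (the fallback helper name is illustrative):

    import "regexp"

    // Exactly what NewSessionID() produces: 103 chars of standard base32.
    var validSessionID = regexp.MustCompile(`^[A-Z2-7]{103}$`)

    if !validSessionID.MatchString(token) {
        // Never issued by us: skip storage entirely and start anonymous.
        return newAnonymousSession(), nil
    }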
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously, if session deletion failed (e.g. storage error), the error
was silently swallowed. The stale session could still be replayed via
Bearer token even after the client-side cookie was cleared.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The server-side session store (gorilla/sessions backed by DB) reused the
same session ID across login: session.Clear() only zeroed the values map
but left session.ID unchanged. An attacker who planted a known session ID
before authentication retained access after the victim logged in.
Fix with a two-phase save:
1. Delete the old session from the DB (MaxAge=-1 save), expiring the cookie.
2. Reset the underlying gorilla Session.ID to "" so the store generates a
fresh ID, then save the authenticated session with original cookie options
(Secure, Path, MaxAge) preserved via a duck-typed interface assertion.
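Sketched against a gorilla/sessions-backed store (the happyDomain wrapper
around *sessions.Session is elided):

    opts := *sess.Options // keep the original cookie options

    // Phase 1: delete the pre-login session and expire its cookie.
    sess.Options.MaxAge = -1
    if err := sess.Save(r, w); err != nil {
        return err
    }

    // Phase 2: an empty ID makes the store mint a fresh one on save; restore
    // the original options for the authenticated session.
    sess.ID = ""
    sess.Options = &opts
    return sess.Save(r, w)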
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously, RecordFailure/RecordSuccess were only called when a captcha
provider was configured, making brute-force tracking entirely inactive
on deployments without one.
- Always track login failures and successes regardless of captcha config
- When threshold is crossed with a captcha provider: 401 + captcha_required (existing behaviour)
- When threshold is crossed without a captcha provider: 429 + rate_limited flag
- Frontend: show a rate-limited message and disable the submit button on 429
- Add errors.rate-limited translation key to all locales
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When the requested email does not exist, the function returned in
microseconds, while a valid email with wrong password took ~100ms
(bcrypt). An attacker could enumerate valid accounts by measuring
response latency.
Add a dummy bcrypt.CompareHashAndPassword call on the not-found path so
both branches take a comparable amount of time.
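Roughly (the lookup and error names are illustrative):

    import "golang.org/x/crypto/bcrypt"

    // A valid hash of a throwaway password at the production cost; comparing
    // against it on the unknown-email path only serves to equalize timing.
    var dummyHash, _ = bcrypt.GenerateFromPassword([]byte("timing-equalizer"), 12)

    user, err := store.GetUserByEmail(email)
    if err != nil {
        bcrypt.CompareHashAndPassword(dummyHash, []byte(password)) // ~same cost as the real check
        return nil, ErrInvalidCredentials
    }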
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Email addresses embedded in error strings could leak to arbitrary log
sinks or error responses as errors propagate. Strip them from the usecase
errors and instead log `IP email: reason` once at the controller level,
keeping fail2ban/CrowdSec-compatible log lines.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Add a bcryptCost constant to centralize the target cost (12), a
NeedsRehash() method that checks the stored hash cost via bcrypt.Cost(),
and trigger a transparent rehash in AuthenticateUserWithPassword when
the stored hash is below the current target.
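The check reduces to bcrypt.Cost() against the constant (receiver and field
names are illustrative):

    import "golang.org/x/crypto/bcrypt"

    const bcryptCost = 12 // target cost

    // NeedsRehash reports whether the stored hash was generated with a lower
    // cost than the current target.
    func (u *UserAuth) NeedsRehash() bool {
        cost, err := bcrypt.Cost(u.Password)
        return err == nil && cost < bcryptCost
    }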
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
bcrypt silently truncates input at 72 bytes. Without an explicit maximum,
a user could set a 200-char password and log in with only the first 72
chars, and very long passwords could be used for a CPU-based DoS.
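The guard is a plain length check before hashing, e.g.:

    const maxPasswordLength = 72 // bcrypt's hard input limit

    if len(password) > maxPasswordLength {
        return fmt.Errorf("password must be at most %d bytes", maxPasswordLength)
    }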
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DeleteSession was returning 200 with a null body instead of the
semantically correct 204 No Content. Updated the Swagger annotation
to match the new status code.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The field existed but was never written, making it useless for security
auditing. Record the time at each successful login and persist it.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>