Add/update/delete service calls in the Service facade were bypassing
ActionOnEditableZone, so mutations could silently target a committed or
published zone instead of deriving a new editable snapshot first.
Wrap AddServiceToZone, RemoveServiceFromZone, and UpdateZoneService
with ActionOnEditableZone so the decorator is applied consistently.
Fixes regression introduced by b2b6467575.
Decouple diff computation from executable provider closures by fetching
provider records and computing diffs locally via DNSControlDiffByRecord.
On apply, build a target record set from user-selected corrections using
BuildTargetRecords, then ask the provider for executable corrections
against that target. A published snapshot is inserted at ZoneHistory[1]
while the WIP zone at position 0 remains unchanged.
Add a LogMailer that prints emails to stdout when no mail transport is
configured, eliminating the reflect-based nil interface checks that were
scattered across the authuser package. The App now always injects a
non-nil Mailer, so the usecase layer no longer needs to guard against it.
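A minimal sketch of the pattern (the Mailer interface name comes from the commit; its method signature here is an assumption, not the real authuser API):

```go
package main

import (
	"fmt"
	"log"
)

// Mailer is the transport interface the App injects (signature assumed
// for illustration).
type Mailer interface {
	SendMail(to, subject, body string) error
}

// LogMailer satisfies Mailer by printing the message to stdout instead
// of sending it, so callers never need a nil or reflect-based check.
type LogMailer struct{}

func (LogMailer) SendMail(to, subject, body string) error {
	log.Printf("mail to=%s subject=%q\n%s", to, subject, body)
	return nil
}

func main() {
	var m Mailer = LogMailer{} // always non-nil: no guard needed in the usecase layer
	if err := m.SendMail("user@example.com", "Welcome", "Hello!"); err != nil {
		fmt.Println("send failed:", err)
	}
}
```

Because the App always injects some Mailer implementation, the interface value is never a typed-nil, which is what the old reflect checks existed to detect.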
Expose service editors publicly (no auth required) at /generator for
SEO discoverability. Each page shows an interactive editor alongside
a live DNS zone record preview powered by a new POST
/service_specs/:ssid/records backend endpoint.
Extract List method into a dedicated ZoneCorrectionListerUsecase to
separate concerns, and fix several bugs in Apply:
- Fix the early-break condition: track appliedCount instead of the loop
  index, which was compared against the position among all corrections
  rather than the number actually applied.
- Stop mutating form.WantedCorrections in-place; use a matched slice
to track applied corrections without side effects.
- Fix misleading UserMessage strings that all said "unable to create
the zone" regardless of which step failed.
- Use a single clock call for CommitDate, Published, and LastModified
instead of two separate time.Now() calls producing different timestamps.
- Inject a clock function for testability.
- Improve error messages to include applied/total correction counts.
When display_by_groups is enabled, domains are now draggable and group
containers act as drop targets. Dropping a domain onto a different group
updates its group via the API and refreshes the domain list.
After AnalyzeZone rebuilds services from raw DNS records, metadata that
cannot be derived from DNS (Id, UserComment, OwnerId, Aliases, TTL, and
service-specific fields like OpenPGP/SMimeCert Username) was lost.
Add a post-processing function ReassociateMetadata that matches new
services to old ones by type and subdomain (using RDATA hashing for
disambiguation) and transfers metadata. Services opt in to body-level
transfer via the new MetadataEnricher interface.
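The metadata transfer can be sketched as follows (field names are taken from the commit; the function shape is an assumption, and RDATA-hash disambiguation is elided):

```go
package main

import "fmt"

// ServiceMeta holds metadata that AnalyzeZone cannot derive from raw
// DNS records (subset of the fields named in the commit).
type ServiceMeta struct {
	Id, UserComment, OwnerId string
	TTL                      uint32
}

// reassociate copies metadata from the old service matched by type and
// subdomain onto the freshly rebuilt one; without this step the rebuilt
// service starts with zero values for all of these fields.
func reassociate(rebuilt, old *ServiceMeta) {
	rebuilt.Id = old.Id
	rebuilt.UserComment = old.UserComment
	rebuilt.OwnerId = old.OwnerId
	rebuilt.TTL = old.TTL
}

func main() {
	old := ServiceMeta{Id: "svc-1", UserComment: "primary MX", OwnerId: "u7", TTL: 3600}
	rebuilt := ServiceMeta{} // AnalyzeZone output: DNS-derived data only
	reassociate(&rebuilt, &old)
	fmt.Println(rebuilt.Id, rebuilt.TTL) // svc-1 3600
}
```

Service-specific fields (like OpenPGP/SMimeCert Username) go through the MetadataEnricher opt-in instead of this common path.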
Restructure the service analyzer architecture to improve maintainability:
- Extract recordPool (zone records + mark-delete claiming) and
serviceAccumulator (service registry + domain normalization) as
embedded structs in Analyzer
- Replace swap-delete with mark-delete to eliminate mutation-during-iteration
- Centralize domain normalization using helpers.DomainRelative
- Make Comment/NbResources lazy via Service.MarshalJSON instead of
eager assignment at three separate call sites
- Extract SPF merging from usecase layer into services.CollectAndMergeSPF
- Add GetDefaultTTL accessor and comprehensive Analyzer doc comments
- Add round-trip test infrastructure covering MX, CNAME, CAA, TXT, SPF,
DMARC, GSuite, Origin, Server and more
When a record matched more than one AnalyzerRecordFilter, it was
appended to the result slice multiple times. Break after the first
matching filter to include each record at most once.
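The fix is a one-line break in the filter loop; a minimal sketch (Record and Filter types are stand-ins for the real ones):

```go
package main

import (
	"fmt"
	"strings"
)

type Record struct{ Name, Type string }
type Filter func(Record) bool

// filterRecords appends each record at most once: the break stops the
// inner loop at the first matching filter. Without it, a record that
// satisfies several filters is appended once per match.
func filterRecords(records []Record, filters []Filter) []Record {
	var out []Record
	for _, r := range records {
		for _, f := range filters {
			if f(r) {
				out = append(out, r)
				break // first match wins
			}
		}
	}
	return out
}

func main() {
	recs := []Record{{"www.example.com", "A"}}
	filters := []Filter{
		func(r Record) bool { return r.Type == "A" },
		func(r Record) bool { return strings.HasPrefix(r.Name, "www") },
	}
	fmt.Println(len(filterRecords(recs, filters))) // 1, not 2
}
```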
The swap-and-shrink deletion inside a range loop skipped the element
swapped into position k. Since there should only be one matching
record (pointer equality), breaking immediately is both correct and
clearer.
RFC 7208 requires exactly one SPF record per domain. Previously, the
standalone SPF service and provider services like GSuite each emitted
their own SPF TXT record, producing invalid DNS when both existed.
Introduce SPFContributor interface so services can declare SPF
directives independently. At zone generation time, all contributions
for the same domain are merged into a single SPF record with the
strictest "all" policy winning. During zone import, GSuite claims its
directive via ClaimSPFDirective so the SPF analyzer excludes it from
the standalone SPF service.
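The merge with strictest-"all" semantics can be sketched like this (the function shape is illustrative; qualifier ordering +all < ?all < ~all < -all follows RFC 7208):

```go
package main

import (
	"fmt"
	"strings"
)

// Strictness ordering of SPF "all" qualifiers: -all > ~all > ?all > +all.
var strictness = map[string]int{"+all": 0, "?all": 1, "~all": 2, "-all": 3}

// mergeSPF combines directive lists from several contributors into the
// single TXT value RFC 7208 allows per domain; the strictest "all"
// policy wins and goes last.
func mergeSPF(contributions [][]string) string {
	var directives []string
	all := "+all"
	for _, c := range contributions {
		for _, d := range c {
			if strings.HasSuffix(d, "all") {
				if strictness[d] > strictness[all] {
					all = d
				}
				continue
			}
			directives = append(directives, d)
		}
	}
	return "v=spf1 " + strings.Join(append(directives, all), " ")
}

func main() {
	merged := mergeSPF([][]string{
		{"include:_spf.google.com", "~all"}, // e.g. a GSuite contribution
		{"mx", "-all"},                      // e.g. the standalone SPF service
	})
	fmt.Println(merged) // v=spf1 include:_spf.google.com mx -all
}
```

Deduplication of identical directives across contributors is elided here for brevity.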
Previously the CSRF state, PKCE verifier, nonce, and next-path were
deleted and the session saved before the token exchange. A failure during
exchange or verification left the user with no way to retry without
restarting the whole flow.
Remove the intermediate session.Save(): the in-memory deletions are
discarded on any error so the session keys remain available for a retry.
On success, SessionLoginOK calls session.Clear() + Save() which atomically
consumes all keys. PKCE ensures the authorization code cannot be replayed
independently of the session.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The next query parameter was silently dropped when users chose OIDC
login, always redirecting to / after authentication. Forward the
validated next value to /auth/oidc, store it in the session during
redirect, and use it for the final redirect in the callback, matching
the behaviour of password-based login.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Generate a cryptographically random nonce at redirect time, store it in
the session, and include it in the authorization request. After token
verification, reject the callback if the ID token's nonce claim does not
match the session value, preventing replayed or stolen ID tokens from
being accepted.
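A minimal sketch of the two halves, generation and verification (function names are illustrative):

```go
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/base64"
	"fmt"
)

// newNonce returns a cryptographically random value to store in the
// session and send as the `nonce` authorization-request parameter.
func newNonce() (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf), nil
}

// nonceMatches compares the ID token's nonce claim against the session
// value in constant time; on mismatch the callback must be rejected.
func nonceMatches(sessionNonce, claimNonce string) bool {
	return subtle.ConstantTimeCompare([]byte(sessionNonce), []byte(claimNonce)) == 1
}

func main() {
	n, _ := newNonce()
	fmt.Println(len(n), nonceMatches(n, n), nonceMatches(n, "forged")) // 43 true false
}
```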
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Generate a cryptographic code verifier at redirect time, store it in the
session, and send the S256 code_challenge in the authorization request.
Use the verifier during token exchange to bind the code to the session
that initiated the flow, protecting against authorization code
interception attacks.
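The verifier/challenge derivation follows RFC 7636; a sketch (helper names are illustrative):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// newVerifier returns a random PKCE code verifier; RFC 7636 allows
// 43-128 characters, and 32 random bytes base64url-encode to 43.
func newVerifier() (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf), nil
}

// s256Challenge derives the code_challenge for the authorization
// request: BASE64URL(SHA256(verifier)), per RFC 7636 section 4.2.
// The raw verifier is sent only later, during the token exchange.
func s256Challenge(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	v, _ := newVerifier()
	fmt.Println(len(v), len(s256Challenge(v))) // 43 43
}
```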
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Log the underlying error server-side and return a generic message to
the client, preventing library internals, error details, and internal
URLs from leaking through OIDC callback error responses.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
SHA-1 has known collision vulnerabilities. Switch to SHA-256 when
deriving a deterministic user identifier from the email address in the
OIDC callback, eliminating the risk of collisions from crafted email
addresses.
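The derivation reduces to a one-line hash swap; a sketch (the hex encoding of the digest is an assumption, only the SHA-1 to SHA-256 switch is from the commit):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// deriveUserID hashes the email with SHA-256 (previously SHA-1) to get
// a deterministic identifier for the OIDC-authenticated user.
func deriveUserID(email string) string {
	sum := sha256.Sum256([]byte(email))
	return fmt.Sprintf("%x", sum)
}

func main() {
	fmt.Println(deriveUserID("alice@example.com"))
}
```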
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Decode and validate the next query parameter before navigating,
ensuring it is a same-origin relative path (starts with / but not //)
to prevent attackers from redirecting users to external sites after login.
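The check can be sketched as follows (function name is illustrative; the original is client-side, this shows the same rule in Go):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// safeNext returns the validated redirect target, falling back to "/"
// when the value fails to decode, is not a relative path, or is a
// protocol-relative URL (//evil.com resolves to an external origin).
func safeNext(raw string) string {
	decoded, err := url.QueryUnescape(raw)
	if err != nil {
		return "/"
	}
	if !strings.HasPrefix(decoded, "/") || strings.HasPrefix(decoded, "//") {
		return "/"
	}
	return decoded
}

func main() {
	fmt.Println(safeNext("%2Fdomains%2Fexample.com")) // /domains/example.com
	fmt.Println(safeNext("//evil.com"))               // /
	fmt.Println(safeNext("https://evil.com"))         // /
}
```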
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Reduce SESSION_MAX_DURATION from 365 days to 15 days
- Add SESSION_RENEWAL_THRESHOLD (7 days): sessions are only extended
when fewer than 7 days remain, instead of refreshing on every request
- Align cookie MaxAge with SESSION_MAX_DURATION (derived from the constant)
- Enforce expiry in load(): expired sessions are deleted on first use
and the caller receives an error, preventing Bearer-token replay of
stale sessions that the securecookie age check would not catch
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The function silently fell back to creating a new session when session.id
was falsy, which could create unintended API tokens from a partial object.
Session creation is already handled by addSession(); updateSession() now
throws early when no id is present.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The handler fetched the session before applying the update and returned
that pre-update snapshot. The client therefore never saw the new
Description or ExpiresOn values. Fetch the session after the update
so the response reflects the persisted state.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously IsNew was unconditionally set to false after s.load() even
when load returned an error. Callers that branch on IsNew could treat
a broken/missing session as a pre-existing authenticated one.
Only mark the session as not-new when the load actually succeeded.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Bearer tokens and Basic Auth usernames were used as session IDs without
any format validation, allowing arbitrary strings (including crafted or
very long values) to reach the storage layer as untrusted session IDs.
Restrict accepted session IDs to the exact format produced by
NewSessionID(): standard base32 alphabet [A-Z2-7], exactly 103 chars.
Any token that does not match is ignored, resulting in a new anonymous
session instead of a storage lookup with attacker-controlled input.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously, if session deletion failed (e.g. storage error), the error
was silently swallowed. The stale session could still be replayed via
Bearer token even after the client-side cookie was cleared.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The server-side session store (gorilla/sessions backed by DB) reused the
same session ID across login: session.Clear() only zeroed the values map
but left session.ID unchanged. An attacker who planted a known session ID
before authentication retained access after the victim logged in.
Fix with a two-phase save:
1. Delete the old session from the DB (MaxAge=-1 save), expiring the cookie.
2. Reset the underlying gorilla Session.ID to "" so the store generates a
fresh ID, then save the authenticated session with original cookie options
(Secure, Path, MaxAge) preserved via a duck-typed interface assertion.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously, RecordFailure/RecordSuccess were only called when a captcha
provider was configured, making brute-force tracking entirely inactive
on deployments without one.
- Always track login failures and successes regardless of captcha config
- When threshold is crossed with a captcha provider: 401 + captcha_required (existing behaviour)
- When threshold is crossed without a captcha provider: 429 + rate_limited flag
- Frontend: show a rate-limited message and disable the submit button on 429
- Add errors.rate-limited translation key to all locales
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>