Validates an MX record set (svcs.MXs) at edit time:
- Null MX (RFC 7505): a "." target must be the only MX in the set, with
preference 0. Both deviations are surfaced.
- Targets: invalid hostnames, out-of-range preferences (uint16) and
duplicate targets (case-insensitive on the FQDN).
- Cross-zone: flags MX targets that are CNAME owners in the same zone
(RFC 5321 sec. 5.1) and warns when an in-zone target lacks any
A/AAAA service. External targets are left to runtime checkers.
Unit tests cover happy paths, the null-MX edge cases, target/preference
validation, duplicate detection and the in-zone cross checks (CNAME
collision, missing address, apex target).
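The null-MX rule can be sketched as follows; type and issue names are
illustrative, not the actual svcs.MXs implementation:

```typescript
// Hedged sketch of the RFC 7505 null-MX check: a "." target must be the
// only MX in the set and must carry preference 0. Both deviations are
// reported independently. Names here are illustrative.
interface MXRecord {
  target: string;
  preference: number;
}

function checkNullMX(mxs: MXRecord[]): string[] {
  const issues: string[] = [];
  const nullMX = mxs.find((mx) => mx.target === ".");
  if (!nullMX) return issues; // no null MX: nothing to check here
  if (mxs.length > 1) issues.push("mx.null-mx-not-alone");
  if (nullMX.preference !== 0) issues.push("mx.null-mx-preference");
  return issues;
}
```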
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
A domain that declines BIMI still needs an enforcing DMARC policy:
without it, an attacker could spoof the domain and claim BIMI logos
in its name, defeating the declination. Skip only the l/a/e URL and
VMC checks under declination, and keep bimi.no-dmarc and
bimi.weak-dmarc-policy active.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The BIMI draft lets a domain explicitly opt out of BIMI by publishing
a record with v=BIMI1 and an empty l= tag. Surface that in the editor
and treat it as a first-class case in compliance.
- bimi.ts gains isBIMIDeclination() and stringifyBIMIDeclination().
- The editor adds a "Decline to participate in BIMI" checkbox; when
checked, the l/a/e fields are hidden and the TXT record is rewritten
to "v=BIMI1;l=". The checkbox is auto-detected when an existing
declination record is loaded.
- The compliance validator detects declination right after the
selector and version checks, emits a single bimi.declination info
message, and skips the URL/cross-record checks that no longer apply.
- Locales updated (en, fr).
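A minimal sketch of the two helpers, assuming a naive tag split (the real
bimi.ts reuses parseKeyValueTxt):

```typescript
// Hedged sketch: a declination record is v=BIMI1 with an explicitly
// empty l= tag. Tag parsing here is simplified for illustration.
function isBIMIDeclination(txt: string): boolean {
  const tags = new Map<string, string>();
  for (const part of txt.split(";")) {
    const eq = part.indexOf("=");
    if (eq > 0) tags.set(part.slice(0, eq).trim().toLowerCase(), part.slice(eq + 1).trim());
  }
  return tags.get("v") === "BIMI1" && tags.has("l") && tags.get("l") === "";
}

function stringifyBIMIDeclination(): string {
  return "v=BIMI1;l=";
}
```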
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Plugs the BIMI editor into the records-compliance framework.
Validators run synchronously and surface:
- Owner-name shape: <selector>._bimi (rejects empty selectors and
non-label characters).
- Version: only "BIMI1" is accepted by the current draft.
- Logo URL: l= is mandatory, must be HTTPS, warns when the path does
not end in .svg.
- VMC URL: optional but flagged as info when missing (Gmail and Yahoo
need it). Must be HTTPS when present; info if it does not look like
a .pem file.
- Evidence URL: must be HTTPS when present.
- Cross-record DMARC check: warns when no DMARC is published or when
every DMARC at the apex sits at p=none, since BIMI is only honoured
with an enforcing DMARC policy.
Locales added under compliance.bimi for en and fr. The new validator
self-registers via $lib/services/compliance/registry.
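The l= checks above can be sketched like this; issue ids are assumptions,
not the validator's actual message keys:

```typescript
// Hedged sketch of the logo-URL checks: mandatory, HTTPS-only, and a
// warning-level issue when the path does not end in .svg.
function parseURLOrNull(s: string): URL | null {
  try {
    return new URL(s);
  } catch {
    return null;
  }
}

function checkLogoURL(l: string | undefined): string[] {
  if (!l) return ["bimi.missing-logo"]; // l= is mandatory
  const url = parseURLOrNull(l);
  if (url === null) return ["bimi.invalid-logo-url"];
  const issues: string[] = [];
  if (url.protocol !== "https:") issues.push("bimi.logo-not-https");
  if (!url.pathname.toLowerCase().endsWith(".svg")) issues.push("bimi.logo-not-svg"); // warning-level
  return issues;
}
```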
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds the frontend pieces needed to edit a BIMI record from the zone UI:
- $lib/services/bimi.ts exposes parseBIMI / stringifyBIMI. Mirrors the
MTA-STS module: parseKeyValueTxt for the input, ordered v/l/a/e on
output, separator style preserved.
- svcs.BIMI.svelte hybridises the MTA-STS structure (BasicInput per
field) with the DKIM selector handling: a free-form selector text
field (defaulting to "default") drives the TXT owner name as
<selector>._bimi, and the parsed fields drive the TXT value through
a $effect.
Test suite covers minimal/full records, separator preservation, and
the parse/stringify roundtrip.
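A stripped-down sketch of the pair (separator-style preservation elided;
this sketch always re-emits ";"):

```typescript
// Hedged sketch of parseBIMI/stringifyBIMI with ordered v/l/a/e output.
interface BIMI {
  v?: string;
  l?: string;
  a?: string;
  e?: string;
}

function parseBIMI(txt: string): BIMI {
  const out: BIMI = {};
  for (const part of txt.split(";")) {
    const eq = part.indexOf("=");
    if (eq <= 0) continue;
    const key = part.slice(0, eq).trim().toLowerCase();
    if (key === "v" || key === "l" || key === "a" || key === "e")
      out[key] = part.slice(eq + 1).trim();
  }
  return out;
}

function stringifyBIMI(rec: BIMI): string {
  // Ordered v/l/a/e output, skipping absent tags.
  return (["v", "l", "a", "e"] as const)
    .filter((k) => rec[k] !== undefined)
    .map((k) => `${k}=${rec[k]}`)
    .join(";");
}
```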
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds support for Brand Indicators for Message Identification (BIMI),
the emerging standard that lets receiving mail clients display verified
brand logos next to authenticated messages.
A BIMI record is a TXT published under <selector>._bimi.<domain> (the
default selector being "default"), carrying a logo URL and an optional
Verified Mark Certificate URL.
The implementation follows the DKIM analyzer pattern for the selector-
aware owner name, and the MTA-STS field-struct pattern for the simple
v/l/a/e shape.
- BIMI struct wraps the TXT record.
- BIMIFields exposes Version, Location, Authority, Evidence with the
happydomain UI tags so that services_specs.ts is generated correctly.
- bimi_analyze picks any TXT under *._bimi.* whose value starts with
v=BIMI, building a service indexed under the apex.
- A test suite covers the default and custom-selector paths, the
rejection of non-BIMI TXT records, and the field round-trip.
services_specs.ts is regenerated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
SvelteKit reuses the [serviceid] page across navigations, only swapping
the data prop. The inner {#key value} in ServiceEditor was not always
catching the reference change in time because $bindable routes reads of
value through the parent's getter, and the change can race with the
componentPromise await/then transition. The result: the underlying
editor (DKIM, SPF, etc.) kept its $state seeded from the previous
service, so the form fields displayed leftover content from the
previously edited record.
Wrapping the ServiceEditor mount in {#key data.serviceid} guarantees a
clean remount every time the user moves to a different service. The
key tracks the only identifier that always changes on navigation,
regardless of whether the type stays the same or changes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Extends the compliance context with findAllServices(type?) so a
validator can iterate every service in the zone, not just a single
subdomain. The DMARC validator now uses it to flag configurations
where alignment is structurally impossible:
- p=quarantine|reject and the zone has neither a DKIM nor an SPF
record -> error: every legitimate message will fail DMARC.
- p=none in the same situation -> warning: DMARC has nothing to
align against, monitoring data will be empty.
- adkim=s (strict DKIM alignment) with no DKIM record published
anywhere in the zone -> warning: only SPF alignment can succeed.
Cross-checks are skipped when the zone state is not provided
(ctx.zone === null), so unit tests and isolated calls keep the
previous behavior.
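A sketch of the structural check, assuming a minimal context shape (the
real ctx exposes findAllServices(type?) rather than a raw array):

```typescript
// Hedged sketch: alignment is structurally impossible when the zone
// publishes neither DKIM nor SPF. Issue ids and types are illustrative.
interface Ctx {
  zone: { type: string }[] | null;
}

function checkAlignmentPossible(p: string, adkim: string, ctx: Ctx): string[] {
  if (ctx.zone === null) return []; // cross-checks skipped without zone state
  const hasDKIM = ctx.zone.some((s) => s.type === "svcs.DKIMRecord");
  const hasSPF = ctx.zone.some((s) => s.type === "svcs.SPF");
  const issues: string[] = [];
  if (!hasDKIM && !hasSPF)
    issues.push(p === "none" ? "dmarc.no-auth-warning" : "dmarc.no-auth-error");
  if (adkim === "s" && !hasDKIM) issues.push("dmarc.strict-adkim-no-dkim");
  return issues;
}
```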
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wires the new POST /api/resolver/mta-sts-policy endpoint into the
MTA-STS validator. The async pass runs after the local TXT checks,
debounced and cancellable through EditorCompliance, and surfaces:
- Transport-level failures: dns-error, tls-error, fetch-error,
too-large.
- HTTP-level failures: not-found (404), http-error (other non-2xx),
redirect (server tried to redirect, RFC 8461 sec. 3.3 forbids it).
- Policy file content: missing/invalid version, missing/invalid mode,
mode=none (warning, effectively disabled), mode=testing (info),
missing mx in enforce/testing modes, missing/out-of-range max_age
(0..31557600), short max_age (< 1 day, warning).
Adds a fetchMTAStsPolicy() wrapper to $lib/api/resolver.ts that accepts
an AbortSignal so the EditorCompliance debounce + abort plumbing covers
this validator like it does for SPF.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a backend endpoint that fetches and parses an MTA-STS policy
file at https://mta-sts.<domain>/.well-known/mta-sts.txt per RFC 8461
sec. 3.3, paired with the (existing) MTA-STS TXT validator on the front-end.
- happydns.MTASTSPolicyRequest accepts a {domain}.
- happydns.MTASTSPolicyResponse returns the parsed version / mode / mx /
max_age plus diagnostic fields (status, httpCode, errorMsg, body)
so the UI can surface fetch / TLS / parse errors without a second
request.
- Status values cover dns-error, tls-error, fetch-error, not-found,
http-error and too-large.
- Hard caps: 64 KiB body, 5 s connect / TLS timeouts, 10 s overall.
RFC-mandated no-redirect-follow is enforced via CheckRedirect.
- Unknown keys, blank lines, comments, CRLF and case differences in the
policy file are tolerated; max_age values that do not parse as int
are dropped silently rather than failing the whole fetch.
Unit tests cover the policy parser (minimal, multi-mx, CRLF + comments,
case folding, junk lines, non-numeric max_age) and the empty-domain
guard.
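The tolerant parsing rules, sketched in TypeScript for illustration (the
real parser is Go on the backend):

```typescript
// Hedged sketch of the policy-file parser: CRLF, blank lines, comments,
// unknown keys and case differences tolerated; non-numeric max_age dropped.
interface MTASTSPolicy {
  version?: string;
  mode?: string;
  mx: string[];
  maxAge?: number;
}

function parsePolicy(body: string): MTASTSPolicy {
  const policy: MTASTSPolicy = { mx: [] };
  for (const rawLine of body.split(/\r?\n/)) {           // CRLF tolerated
    const line = rawLine.trim();
    if (line === "" || line.startsWith("#")) continue;   // blanks & comments
    const colon = line.indexOf(":");
    if (colon <= 0) continue;                            // junk lines tolerated
    const key = line.slice(0, colon).trim().toLowerCase(); // case folding
    const val = line.slice(colon + 1).trim();
    switch (key) {
      case "version": policy.version = val; break;
      case "mode": policy.mode = val.toLowerCase(); break;
      case "mx": policy.mx.push(val); break;
      case "max_age": {
        const n = Number.parseInt(val, 10);
        if (!Number.isNaN(n)) policy.maxAge = n;         // silent drop otherwise
        break;
      }
      // unknown keys ignored
    }
  }
  return policy;
}
```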
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds checks for svcs.TLS_RPT against RFC 8460 sec. 3.
The validator surfaces:
- Wrong owner name (must be _smtp._tls.<domain>).
- Missing or non-TLSRPTv1 v= tag.
- Missing rua= report destination.
- Empty entries inside rua=.
- rua URIs that are neither mailto: nor http(s):.
- Malformed mailto URIs (missing @ or domain).
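The rua= URI checks can be sketched as follows; issue ids are illustrative:

```typescript
// Hedged sketch: rua entries must be mailto: or http(s): URIs, and a
// mailto address needs both a local part and a domain.
function checkRuaURI(uri: string): string | null {
  if (uri === "") return "tlsrpt.empty-rua-entry";
  if (uri.startsWith("mailto:")) {
    const addr = uri.slice("mailto:".length);
    const at = addr.indexOf("@");
    // Malformed when the @ is missing, leading, or the domain is empty.
    if (at <= 0 || at === addr.length - 1) return "tlsrpt.bad-mailto";
    return null;
  }
  if (uri.startsWith("https://") || uri.startsWith("http://")) return null;
  return "tlsrpt.bad-rua-scheme";
}
```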
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds checks for svcs.MTA_STS against RFC 8461 sec. 3.1.
The validator surfaces:
- Wrong owner name (must be _mta-sts.<domain>).
- Missing or non-STSv1 v= tag.
- Missing id= tag.
- id= containing characters outside [A-Za-z0-9] or longer than 32 chars.
The TXT only carries the policy pointer; validating the actual policy file
at mta-sts.<domain>/.well-known/mta-sts.txt requires an HTTPS fetch and is
out of scope for this sync pass.
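The id= constraint (1 to 32 characters from [A-Za-z0-9]) sketched, with
assumed issue ids:

```typescript
// Hedged sketch of the id= validation described above.
function checkMTASTSId(id: string | undefined): string | null {
  if (!id) return "mta_sts.missing-id";
  if (id.length > 32 || !/^[A-Za-z0-9]+$/.test(id)) return "mta_sts.bad-id";
  return null;
}
```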
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds compliance checks for svcs.DMARC against RFC 7489.
The validator parses the published TXT and surfaces:
- Wrong owner name (record must live at _dmarc.<domain>).
- Missing or non-DMARC1 v= tag.
- Missing, unknown, or "monitoring-only" p= policy.
- Invalid sp= subdomain policy.
- Invalid adkim/aspf alignment values.
- pct= out of [0..100] (error) and pct < 100 (info, partial deployment).
- Non-positive or non-numeric ri=.
- fo= entries outside the known set (0 / 1 / d / s) and rf= formats
other than afrf.
- Empty or malformed rua/ruf URIs (mailto and http(s) accepted; mailto
size suffix !N preserved).
A 25-case test suite covers each issue id, plus happy paths for a
minimal reject record and rua mailto/http URIs.
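Two of the tag checks, sketched with assumed issue ids:

```typescript
// Hedged sketch of the pct= and p= checks: pct outside [0..100] is an
// error, pct < 100 an info (partial deployment), p=none monitoring-only.
function checkPct(pct: string): { id: string; severity: "error" | "info" } | null {
  const n = Number(pct);
  if (pct.trim() === "" || !Number.isInteger(n) || n < 0 || n > 100)
    return { id: "dmarc.bad-pct", severity: "error" };
  if (n < 100) return { id: "dmarc.partial-pct", severity: "info" };
  return null;
}

function checkPolicy(p: string | undefined): string | null {
  if (!p) return "dmarc.missing-policy";
  if (!["none", "quarantine", "reject"].includes(p)) return "dmarc.unknown-policy";
  if (p === "none") return "dmarc.monitoring-only";
  return null;
}
```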
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Semicolons have no syntactic role in SPF (RFC 7208 sec. 4: terms are
space-separated), but they are the separator used by DKIM, DMARC,
MTA-STS and other key=value TXT records. When an SPF editor's TXT slot
inherited residue like "v=spf1 -all;k=rsa", parseSPF would treat
"-all;k=rsa" as a single directive token and the residue would stick
to the last term instead of being surfaced as a stray field.
Splitting on whitespace AND semicolons turns the residue into separate
directives that the validator can flag as duplicates / unknown
mechanisms, and lets the editor display them as removable rows.
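The split itself is one line:

```typescript
// Hedged sketch: semicolons become term boundaries alongside whitespace,
// so foreign key=value residue surfaces as its own flaggable term.
function splitSPFTerms(txt: string): string[] {
  return txt.split(/[\s;]+/).filter((t) => t !== "");
}
```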
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a service has its type changed in place (e.g. DKIM to SPF), the
underlying value.txt.Txt may still contain the previous record's
content. parseSPF would then treat the entire foreign string as the
"v=" tag, leaving fragments like ;k=rsa glued to the last directive
once new SPF terms got serialized back.
Guard the editor: if the existing TXT does not start with v=spf1,
reset it to "v=spf1 -all" before parsing. The guard only triggers on
foreign content, so legitimate SPF records are preserved as-is.
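The guard, sketched (the real code reads the editor's TXT slot):

```typescript
// Hedged sketch: foreign content (e.g. a leftover DKIM record) is reset
// to a safe default; legitimate SPF records pass through untouched.
function guardSPFText(existing: string): string {
  return existing.trim().startsWith("v=spf1") ? existing : "v=spf1 -all";
}
```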
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wires the new POST /api/resolver/spf-flatten endpoint into the SPF
validator. The async path runs after the local checks, debounced and
cancellable through EditorCompliance, and surfaces:
- spf.recursive-many-lookups / spf.recursive-too-many-lookups based on
the recursive lookupCount returned by the backend
- spf.too-many-void-lookups when more than 2 NXDOMAIN/NoData responses
occur during the walk (RFC 7208 §4.6.4)
- per-include diagnostics: spf.include-loop, spf.include-no-spf,
spf.include-resolver-error, spf.include-error — pointing at the exact
domain and mechanism that failed
The async pass is skipped entirely when the local lookup budget is 0,
to avoid a network roundtrip on records that obviously cannot exceed
the limit.
A small flattenSPF() helper is added to $lib/api/resolver.ts to wrap
the auto-generated SDK call and accept an AbortSignal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Extracts the SPF parser/serializer out of the editor into
$lib/services/spf.ts (matching dmarc.ts / mta_sts.ts) and adds a sync
validator that flags non-recursive issues against RFC 7208:
- missing or wrong v=spf1
- absence / multiplicity / non-final placement of ‘all’
- redirect= combined with ‘all’ or duplicated
- ptr deprecation (RFC 7208 §5.5)
- local DNS-lookup budget (warn ≥8, error >10) — recursive flatten will
come later via an async backend endpoint
- mechanisms missing values, empty terms, duplicates, length cap
24 unit tests cover positive and negative cases. The editor itself loses
its inline parser and reuses the shared module; behavior is unchanged.
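The local budget can be sketched as follows; RFC 7208 §4.6.4 counts
include, a, mx, ptr, exists and redirect= toward the limit of 10:

```typescript
// Hedged sketch of the local DNS-lookup budget. Only terms that trigger
// a DNS query are counted; ip4/ip6/all are free. Term shapes simplified.
function countLocalLookups(terms: string[]): number {
  const counted = ["include", "a", "mx", "ptr", "exists"];
  let n = 0;
  for (const term of terms) {
    const bare = term.replace(/^[+\-~?]/, "").toLowerCase(); // strip qualifier
    const name = bare.split(/[:=\/]/, 1)[0];
    if (counted.includes(name) || bare.startsWith("redirect=")) n++;
  }
  return n;
}
```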
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a recursive SPF flatten endpoint sized for the compliance UI:
- happydns.SPFFlattenRequest accepts a {domain, record?} pair so the UI
can preview an unsaved record without persisting it first; the optional
inline record bypasses the root TXT lookup.
- happydns.SPFFlattenResponse returns the recursive tree with per-node
Mechanism / Domain / Record / LookupsHere / Error fields, plus the
RFC 7208 §4.6.4 budget counters (LookupCount, VoidLookups, Exceeded,
VoidExceeded, Truncated).
- Hard caps at 10 lookups, 2 void lookups, depth 12, 2s per query and
10s overall. Cycle detection via the visited-domain set.
- Resolver selection mirrors ResolveQuestion (local / custom / default
to 1.1.1.1) with the same IPv6-bracket handling.
Unit tests cover the term parser (qualifiers, modifiers, ip4/ip6
edge cases), record selection from a TXT bundle, local lookup counting,
loop detection and budget overrun. DNS-dependent recursion is exercised
through integration calls.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Previously the compliance section stayed completely hidden until at
least one issue was reported, which made it indistinguishable from a
broken validator hookup. The panel now renders as soon as a validator
is registered for the current service type, with an "All checks passed"
status when the issue list is empty and the async pass is idle.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Plugs the first per-record compliance validator into the framework.
Validates a DKIM TXT record (svcs.DKIMRecord) at edit time:
- Selector: must be present, must match the label charset.
- Version: only "DKIM1" is accepted (RFC 6376 sec. 3.6.1).
- Public key: detects missing p=, empty p= (revocation, warning), and
non-base64 payloads. Warns on RSA keys shorter than ~2048 bits and
errors on RSA keys shorter than ~1024 bits per RFC 8301.
- Algorithms: warns on SHA-1 (RFC 8301) and unknown hashes; flags
unknown key types or service types.
- Flags: surfaces t=y (testing) as info; warns on unknown flags.
- Granularity: marks g= as deprecated since RFC 6376.
A unit test suite exercises every issue id plus a few happy paths and
the empty-input case.
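The key-size warning can be approximated from the base64 payload length
alone, without a DER parser; the thresholds below are crude assumptions
for illustration, not the validator's actual logic:

```typescript
// Hedged sketch: the decoded p= blob is a DER SubjectPublicKeyInfo whose
// length tracks the RSA modulus size closely enough for a warn/error
// bucket. Byte thresholds are rough assumptions.
function estimateRSABits(p: string): number | null {
  if (!/^[A-Za-z0-9+\/=]+$/.test(p)) return null;    // non-base64 payload
  const derBytes = Math.floor((p.length * 3) / 4);   // decoded length
  // A 1024-bit SPKI is ~162 bytes, a 2048-bit one ~294 bytes.
  if (derBytes < 120) return 512;
  if (derBytes < 230) return 1024;
  if (derBytes < 420) return 2048;
  return 4096;
}
```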
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Introduces the frontend-only compliance framework that lets each editor
contribute spec-conformance checks. The infra is self-contained:
- $lib/services/compliance.ts exposes ComplianceIssue/Severity/Validators
types plus a small registry API.
- $lib/services/compliance/registry.ts is the central side-effect import
point where per-record validators get wired up commit by commit.
- EditorCompliance.svelte renders sync issues immediately and runs async
validators with a debounce + AbortController; it stays hidden when no
validator is registered and when there are zero issues.
- ServiceEditor.svelte mounts the panel under every loaded editor; no
per-editor wiring is needed.
- locales: new "compliance" namespace (en, fr) for shared UI strings.
No validators are registered yet; this commit is intentionally inert.
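The registry shape, sketched with assumed type and function names:

```typescript
// Hedged sketch of the registry API: editors register sync validators
// per service type; EditorCompliance looks them up by type.
type Severity = "error" | "warning" | "info";
interface ComplianceIssue {
  id: string;
  severity: Severity;
}
type SyncValidator = (service: unknown) => ComplianceIssue[];

const registry = new Map<string, SyncValidator[]>();

function registerValidator(serviceType: string, v: SyncValidator): void {
  const list = registry.get(serviceType) ?? [];
  list.push(v);
  registry.set(serviceType, list);
}

function validatorsFor(serviceType: string): SyncValidator[] {
  return registry.get(serviceType) ?? [];
}
```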
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replace dueling parse/stringify $effects across service editors with
one-time top-level init plus a single write-back $effect. Remount
editors via {#key value} in ServiceEditor so children no longer need
inbound-sync logic.
Domain- and user-scoped consumers were missing every discovery entry
published below their scope. The exact-match prefix dscent-tgt|{u/d/s}|
introduced in 9c6398b1b only returned entries stored at the literal
target string, so a domain-scoped consumer like checker-tls or
checker-caa never received the tls.endpoint.v1 entries that
service-scoped producers (checker-dane, checker-smtp, checker-sip,
checker-srv, checker-stun-turn) publish under the same domain. The
symptom on the consumer side was "No TLS endpoints have been discovered
for this target yet." even when producers had run.
Drop the trailing "|" from the iteration prefix when the target lacks a
ServiceId (and the DomainId for user scope) so the prefix scan matches
narrower scopes too. RawURLEncoded identifiers contain neither "/" nor
"|", so slash boundaries in the encoded "u/d/s" target form remain
unambiguous. Service-scoped lookups stay exact. Each matching key is
parsed back into its actual stored target before fetching the primary
record, so the returned StoredDiscoveryEntry.Target reflects where the
entry was published, not the (broader) target that found it.
Propagate the persisted CheckEvaluation.States through BuildReportContext
and the HTTP report transport so reporters can render rule-driven
sections (hints, severity) without re-deriving them from raw data. When
no evaluation is available the context carries nil states, matching the
SDK's documented nil-safe fallback to data-only rendering.
Rules now return []CheckState, the engine stamps RuleName from the rule,
and the HTTP rule-result lookup matches on RuleName rather than Code.
domain_contact emits one state per role (Subject) instead of a
concatenated single-state message.
Complete the ReportContext composition path so reporters can fold
downstream observations into their output:
- checker.BuildReportContext wraps a raw payload plus the engine's
RelatedObservationLookup in a lazy ReportContext: Related(key) is
resolved on first access and cached. When no lookup is wired the
context falls back to sdk.StaticReportContext, matching the
pre-existing behaviour.
- GetHTMLReportWithContext / GetMetricsWithContext: new helpers that
accept a pre-built ReportContext, for callers that want to feed
Related into a reporter explicitly.
- The execution controller now builds a ReportContext via the
engine's RelatedLookup method before calling the HTML reporter.
When the engine is wired with discovery storage, the reporter sees
the producer's consumer lineage through ctx.Related(consumerKey).
- HTTPObservationProvider implements CheckerHTMLReporter and
CheckerMetricsReporter: both forward to POST /report with
ExternalReportRequest{Key, Data, Related}. A 501 response is
surfaced as an explicit "does not support /report" error. These
methods are available for callers that want to route reports to
remote checkers; the default in-process reporter dispatch is
unchanged.
Close the discovery loop described in docs/checker-discovery.md: entries
published in commit 3 now feed consumer checkers, and their observations
flow back to the original producer.
Three tightly-coupled changes:
- CheckerOptionsUsecase gains an optional DiscoveryEntryStorage
dependency (WithDiscoveryEntryStore). When a checker declares
AutoFill="discovery_entries" on an option,
BuildMergedCheckerOptionsWithAutoFill populates it with the entries
stored for the target: all producers, no host-side filtering by
Type. The method also returns the concrete list of entries injected
so the engine can persist lineage for them.
- CheckerEngine records a DiscoveryObservationRef per (entry, obs key)
tuple after the snapshot is stored. The ref namespaces back to the
*producer* (ProducerID, Target, Ref) while carrying the consumer's
key and the snapshot pointer, so a later GetRelated from the
producer can reach the consumer's observation in one lookup.
- ObservationContext exposes SetRelatedLookup (called once per run by
the engine) and implements GetRelated on top of the installed
closure. The engine's closure walks the producer's published
entries, resolves each ref's observation refs, loads the snapshots,
and materialises []RelatedObservation. Stale refs (entry gone,
snapshot TTL'd) are skipped silently: implicit GC, as the doc
permits.
Wire the newly-added DiscoveryEntryStorage into the execution pipeline:
- ObservationContext tracks DiscoveryEntry records published by each
provider. After Collect, providers that implement DiscoveryPublisher
are asked for their entries (on the native Go value, no JSON round
trip), and the results are cached by observation key.
- HTTPObservationProvider also implements DiscoveryPublisher: it
records the Entries field of the remote /collect response and
surfaces them through DiscoverEntries. Each override instance is
scoped to a single execution run, so no locking is needed.
- CheckerEngine.runPipeline calls ReplaceDiscoveryEntries after
persisting the snapshot, always replacing the previous set for
(checkerID, target), including when a run produces none, so stale
entries from earlier cycles self-heal.
Introduce the two KV indexes that back the cross-checker discovery
mechanism described in docs/checker-discovery.md:
dscent|{producer}|{target}|{type}|{ref} primary record
dscent-tgt|{target}|{producer}|{type}|{ref} target lookup (auto-fill)
dscobs|{producer}|{target}|{ref}|{consumer}|{k} observation lineage
dscobs-snap|{snapshotId}|... cascade on snapshot delete
ReplaceDiscoveryEntries is the canonical publication path: the whole
set previously stored for (producer, target) is cleared, then the new
set is written. The observation-lineage side uses a single upsert per
(producer, target, ref, consumer, key) tuple, with a snapshot-scoped
reverse index so deleting a snapshot cascades cleanly. Putting a ref
under a new snapshot removes the previous snap-index so a later
cascade on the old snapshot does not wipe the refreshed primary.
Adds StoredDiscoveryEntry and DiscoveryObservationRef to the host-only
model, DiscoveryEntryStorage / DiscoveryObservationStorage to the
checker usecase storage surface, embeds both in storage.Storage, and
regenerates the instrumented wrapper. Unit tests cover round-trip,
atomic replace, multi-producer aggregation, upsert, and cascade
delete.
No pipeline wiring yet.
Update happyDomain to the new checker-sdk-go reporter contract, where
CheckerHTMLReporter.GetHTMLReport and CheckerMetricsReporter.ExtractMetrics
take a ReportContext instead of a raw json.RawMessage. The ReportContext
will later carry cross-checker related observations; for now every call
site wraps the raw payload via sdk.StaticReportContext, so behavior is
unchanged.
Also re-export the new discovery-related SDK types (DiscoveryEntry,
DiscoveryPublisher, RelatedObservation, ReportContext,
AutoFillDiscoveryEntries) as aliases under happydns, and satisfy the
extended ObservationGetter interface on ObservationContext and the
test stub with a no-op GetRelated.
No new behavior: plumbing for the upcoming discovery pipeline.
The predicate guarding service-checker auto-scheduling was duplicated
across buildQueue and two sites in NotifyDomainChange. Pull it into a
single helper so the rule lives in one place.
Service-level checkers without LimitToServices no longer get enqueued
for every matching service: they must be activated explicitly via a
CheckPlan. Domain checkers and service checkers that declare a
LimitToServices whitelist keep their previous auto-discovery behavior.
Extend the admin backup to cover checker configurations, plans,
evaluations and executions — previously these were stored but silently
lost on restore. Add RestoreX storage methods so primary records keep
their original Id and secondary indexes are rebuilt (Create* generates
new IDs, Update* requires an existing record to clean stale indexes).
Thread a dropInvalid bool through every TidyUpUseCase method and
expose it as a drop_invalid query parameter on POST /tidy (default
true). When set, Tidy deletes records that fail to decode — e.g.
legacy executions and evaluations whose CheckState.Status was stored
as a string before the SDK switched it to int — instead of leaving
them stuck in the store to log on every iteration.
Also reset KVIterator.err on exhaustion so a prior decode failure
does not surface as a spurious iteration error.
The whoisparser library does not return ErrNotFoundDomain for Verisign
"No match" responses — it parses them into a result with an empty
Domain field. Add a post-parse check to detect this case and return
ErrDomainDoesNotExist.
Replaces the three REST count calls with a single Prometheus scrape that
auto-refreshes every 15s, surfaces queue/worker/in-flight/RSS/version/uptime
as featured cards, and tucks counters and Go runtime stats under a
"Show more metrics" Collapse.