Editorial standards

Freelance Codex aims to be reliable, not just persuasive. This page defines the editorial bar for what we publish, how we label uncertainty, and how we keep pages maintained over time.

If you only read one thing here, read this:

A page should tell you what is true, what is recommended, what is assumed, and what is uncertain.

That sounds obvious, but much of the internet blends those categories into one confident paragraph, and that's how people get burned.

Our bar is: if a reader could take action based on a sentence, that sentence should be scoped, supported, and labeled.

For how we schedule and document updates, see: Review policy. For how we disclose money relationships that might influence recommendations, see: Disclosure policy.

The “Codex truth” model

We separate information into four buckets and label accordingly:

  1. Verified facts
    Things that can be checked against primary sources or direct evidence. Example: “Net 30 means payment is due 30 days after the invoice date” (definition), or “This platform changed its payout schedule on X date” (when sourced).
  2. Widely accepted expert consensus
    Common best practices supported by experienced practitioners and reputable guidance. Example: “A written scope reduces scope creep.” There are edge cases, but the consensus is strong.
  3. Informed inference
    Reasoning that follows from facts and consensus, with explicit assumptions. Example: “If late payment is recurring, switching to deposits reduces risk” (assumes you can enforce terms and your market tolerates it).
  4. Speculation
    Hypotheses about uncertain futures or situations without strong evidence. This should be labeled and used sparingly.

A page that blends these categories without telling you what's what is not “Codex-quality.” Inference and speculation must be labeled, not smuggled inside factual tone.

What “Codex” means (the canonical page rule)

When we say “Codex,” we mean:

  • One canonical URL per evergreen topic.
    Not five overlapping posts competing for the same keyword.
  • Updated over time with visible change notes.
    Pages should show “last reviewed” and a short change log.
  • Starts with a tight summary, then expands into depth.
    You should be able to get a usable answer in under a minute, then dive deeper only if needed.

This structure fights three common failures:

  • content rot (outdated posts),
  • keyword cannibalization (multiple weak pages instead of one strong one),
  • and cognitive overload (too much context before the answer).

Browse the canonical pages here: The Codex.

Our writing standards (clarity beats cleverness)

Freelance Codex writing follows a “field manual” style:

  • Fast answer first. Put the core guidance near the top.
  • Define terms once. Don't hide meaning behind jargon.
  • Show steps, not vibes. A reader should be able to apply the guidance.
  • Name failure modes. Real life includes late payment, scope drift, and chaos.
  • Be explicit about tradeoffs. Every recommendation has costs.

The page skeleton we aim to use

Most Codex pages follow a consistent format:

  • Codex summary (40–80 words)
  • Who this is for (and who it's not for)
  • If you only do 3 things
  • Steps (the process)
  • Common mistakes
  • Tools and templates (internal links)
  • Jurisdiction notes (where relevant)
  • Status (last reviewed)
  • Change log

You can see this format in action on the canonical Codex pages.

Sources and citations (what we prefer, and why)

We source claims so a reader (or us, later) can verify them, understand scope, and notice when the ground has shifted. Not every sentence needs a citation, but actionable or time-sensitive claims should be supported or clearly labeled as uncertain.

Our source hierarchy

When a page depends on factual claims that could be wrong or could change, we prefer:

  1. Primary sources
    Official guidance, statutes, regulator pages, platform announcements, direct documentation.
    Primary doesn't mean “perfect,” but it does mean you can see the original wording and effective dates.
  2. High-quality secondary sources
    Reputable journalism, expert analysis, established industry publications.
    Secondary sources help with context and cross-checking, especially when they point back to primary documents.
  3. Tertiary sources (used cautiously)
    Blog posts, forums, and social media threads. These are useful for signals, not for final truth.

When we cite

We aim to cite when:

  • a factual claim is non-obvious,
  • a claim could plausibly be disputed,
  • a topic is high-stakes (contracts, taxes, compliance),
  • or the claim is time-sensitive (platform policies, regulatory updates).

When a primary source is unclear

If the official wording is ambiguous, incomplete, or scattered, we quote what it says, label what is inferred, and default to conservative guidance when stakes are high.

When sources disagree

Disagreement is not a reason to pick the most confident voice. It's a reason to show the disagreement.

When sources conflict, we try to:

  • describe what each source says,
  • explain what's actually disputed,
  • label what's unknown,
  • and use conservative guidance when stakes are high.

That's not “hedging.” It's accuracy.

Recommendations vs facts (and how we prevent accidental advice-giving)

Freelancers often search for “what should I do?” and the honest answer is usually conditional.

So we try to frame recommendations like this:

  • Recommendation: do X
  • Because: reason grounded in experience and risk management
  • Assuming: A, B, C are true in your situation
  • If not: do Y instead

Example (simplified):

  • Recommendation: use milestone billing for high-risk projects
  • Because: it reduces exposure to non-payment and ties payment to progress
  • Assuming: the client is willing to agree to milestones and you can enforce a pause
  • If not: require a larger deposit or reduce scope to a smaller first phase

We also route readers to tools that make recommendations operational.

Handling uncertainty (explicitly, not implicitly)

Uncertainty is normal in freelancing guidance because markets change, jurisdictions differ, and client behavior is unpredictable.

We handle this by labeling assumptions, constraints, risk level (low-stakes workflow advice vs high-stakes legal/tax), and alternatives when a key assumption fails.

Uncertainty labels (the reader-facing tags)

When uncertainty changes what a reader should do, we try to add a short, explicit label. The wording varies by page, but the intent is consistent:

  • Verified: checkable and anchored to primary sources or direct evidence.
  • Consensus: a widely used practice, with known edge cases called out.
  • Inference: a recommendation that depends on stated assumptions.
  • Speculation: a hypothesis about an uncertain future. Useful for planning, not for pretending.
  • Unknown / disputed: where sources conflict or the details can't be confirmed well enough to call them verified.

The goal is simple: you should know whether you're reading a verified fact or a conditional recommendation.

Example (rewrite): Instead of “Late fees are always enforceable,” we'd write: “Late fees may be enforceable depending on your contract and jurisdiction. Treat enforceability as a jurisdiction-specific check and rely on backups like deposits, milestones, or a pause-work clause.”

Example (assumption label): “Recommendation: pause work after X days overdue. Assuming: your contract allows a pause and you're willing to enforce the boundary.”

If you read a page and can't tell what the assumptions are, that's a bug.

Jurisdiction notes (no legal/tax mush)

Taxes and compliance are not universal. A single page that mixes US + UK + CA rules into one paragraph helps nobody.

So we use jurisdiction notes:

  • We keep the evergreen guidance general where possible.
  • We add country notes explicitly (US/UK/CA/AU, etc.) when needed.
  • We avoid pretending the same legal/tax rule applies everywhere.

Jurisdiction notes are also about precision. Some statements are universal (“write things down”), but others depend on the governing law, the parties' locations, and how a platform operates. When a claim crosses that line, we label it.

Example (illustrative):

  • “Use a work-pause clause for overdue invoices” is general.
  • “Late fees are enforceable under X rule” is jurisdiction-specific and must be labeled.

When a topic is highly jurisdiction-dependent, we prefer modular sections and links to primary sources rather than one blended explanation.

If you're working cross-border, there may be more than one relevant jurisdiction. Treat the page as a map of what to verify, not a substitute for professional advice.

Not legal or tax advice (what that actually means)

“Not legal advice” is often used as a talisman. Here's what it means here:

  • We provide educational information and templates.
  • We explain common clauses and workflows.
  • We do not provide individualized legal counsel.
  • For high-stakes situations (large contracts, disputes, compliance risk), we encourage consulting a qualified professional in your jurisdiction.

Concretely, our standard is to help you understand the moving parts, ask better questions, and reduce obvious risk. It is not to tell you “the correct legal answer” for your exact situation.

In practice, we aim to:

  • avoid absolute claims in legal/tax areas,
  • recommend conservative defaults,
  • and route readers to primary sources.

Our terms spell this out too: Terms.

Maintenance standards (the point of the whole site)

A maintained page should show:

  • Last reviewed: when it was checked for correctness
  • Change log: what changed and why

That policy is defined here: Review policy.

When a page should be updated immediately

Some events should trigger an out-of-cycle update:

  • major platform policy changes that affect freelancers,
  • tax/compliance changes in common jurisdictions,
  • or a recurring “failure mode” reported by readers (templates breaking in real use).

Those moments often show up first as Radar posts, then get folded into the canonical page.

Corrections policy (how errors get fixed)

We assume we will make mistakes. The goal is to correct them quickly and transparently.

A “correction” is a change that fixes something a reader could rely on: a factual claim, a step in a process, a template instruction, or a jurisdiction-scoped statement that was too broad.

Not every edit is a correction. Minor typos may not be called out, but any change that could alter what a reader does next should be reflected in the page's change log.

If you find an error:

  1. send the URL,
  2. state what's wrong,
  3. include a source that supports the correction (when applicable),
  4. and tell us what a reader might do incorrectly if they follow the current text.

If you can, quote the exact sentence/step and propose replacement wording. That makes it easier to verify the issue and apply the fix consistently.

Please don't include sensitive personal or client information in a correction request. If a real dispute is the context, anonymize it and focus on the generalizable issue. (See: Privacy policy.)

Use: Contact.

When we correct a factual error, we aim to:

  • update the page,
  • note the correction in the change log,
  • and adjust related pages if the error appears elsewhere.

If the issue is time-sensitive or high-stakes, we may publish a fast clarification while we verify and update the full section.

For the full “what counts as meaningful” framework, see: Review policy.

Editorial independence and incentives

Trust is partly about incentives.

We publish a disclosure policy so readers can see when money could influence recommendations: Disclosure policy.

We also try to avoid building content systems that reward:

  • sensational claims,
  • constant posting without maintenance,
  • or “tool lists” designed primarily for affiliate revenue.

When incentives and truth collide, truth wins, or the site becomes useless.

A practical checklist: “Does this page meet the standard?”

If you're reading a Codex page and wondering whether it's “done,” these are the questions we ask internally:

  • Can a reader get a usable answer from the first screen?
  • Are key terms defined clearly?
  • Are recommendations operational (steps, scripts, templates)?
  • Are assumptions and constraints explicit?
  • Are high-stakes claims sourced or conservatively framed?
  • Are jurisdiction-specific statements labeled?
  • Does the page link to the relevant tools and sibling pages?
  • Does the page show “last reviewed” and a change log?

If the answer is “no” to several of these, the page needs work.

FAQ

Why not just write blog posts?

Because the problems repeat and the details change. Canonical pages + visible updates are a better fit for high-stakes topics like contracts, pricing, payment, and taxes.

Do you cite everything?

Not everything. We cite when a claim is non-obvious, disputable, time-sensitive, or high-stakes. We also separate facts from recommendations so readers can evaluate the reasoning.

How do you handle conflicting advice?

We show the conflict, explain what's disputed, and use conservative guidance where stakes are high. We avoid pretending certainty to sound authoritative.

How do I know a page is current?

Check “Last reviewed” and the change log on the page. The cadence is described here: Review policy.

What does “verified” mean on a practical level?

It means the claim is checkable against a primary source or direct evidence, and it is written precisely enough that a reader could validate it. “Verified” does not mean “permanently true.” If a topic can change (platform policies, regulations), we aim to pair the claim with time context (effective date, “as of,” or “last reviewed”) so you can judge whether to re-check.

Why do some pages avoid exact numbers?

Many numbers are jurisdiction-specific (tax thresholds), market-dependent (rates), or fast-changing (platform fees). When a number is likely to rot, we prefer durable frameworks: how to decide, what to ask, what to measure, and how to build a conservative plan. When a number is essential, it should be scoped, sourced, and tied to an effective date.

How should I report an error or suggest an improvement?

Send the URL, quote the exact line, and tell us what the reader impact is (what someone might do incorrectly). If you have a source that supports the correction, include it. Use: Contact.