Hello, Future Colleague!

BLOG↗

This write-up covers the architectural decisions behind the coding challenge. Where trade-offs were made, I've tried to explain the reasoning rather than just the outcome.

Content Types in Prismic:

I defined six custom types and two slices for this project. The types are blog_page, blog_home, author, category, global_settings, and home; the slices are BlogHero and BlogBody.

The global_settings type manages site-wide content such as the nav bar, footer, and global fallbacks. This provides an important baseline for technical SEO health: if content teams miss SEO fields during page creation, sensible defaults still apply.
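To make the fallback idea concrete, here is a sketch of how page metadata could cascade to global defaults. The field names (meta_title, default_meta_title) and the createClient helper are my own illustrations, not the actual project code:

```typescript
// app/blog/[uid]/page.tsx (sketch). Field names and the createClient
// helper are illustrative, not the project's actual identifiers.
import type { Metadata } from "next";
import { createClient } from "@/prismicio"; // hypothetical client factory

export async function generateMetadata(
  { params }: { params: { uid: string } }
): Promise<Metadata> {
  const client = createClient();
  const [page, settings] = await Promise.all([
    client.getByUID("blog_page", params.uid),
    client.getSingle("global_settings"),
  ]);
  // Fall back to global defaults whenever the editor left a field empty.
  return {
    title: page.data.meta_title || settings.data.default_meta_title,
    description: page.data.meta_description || settings.data.default_meta_description,
  };
}
```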

The blog page uses the two custom slices, BlogHero and BlogBody, giving content teams flexibility to build out pages. This composition also leaves room to add more slices in future, reorder them, run A/B tests, and so on. For example, I was able to quickly implement a home page and reuse the BlogBody slice to create this write-up.

I also encapsulated all blog styling in a blogComponent const passed to Prismic's rich text renderer, which allows branding to be altered quickly at a future date.
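A sketch of what that map can look like, assuming the @prismicio/react serializer API; the class names and the exact entries are illustrative, not the project's actual styling:

```tsx
// One place to restyle all rich text output; class names are illustrative.
import { PrismicRichText, type JSXMapSerializer } from "@prismicio/react";

const blogComponent: JSXMapSerializer = {
  heading2: ({ children }) => (
    <h2 className="text-3xl font-semibold mt-10">{children}</h2>
  ),
  paragraph: ({ children }) => (
    <p className="text-base leading-7 mt-4">{children}</p>
  ),
  hyperlink: ({ children, node }) => (
    <a href={node.data.url} className="underline">{children}</a>
  ),
};

// Usage inside a slice:
// <PrismicRichText field={slice.primary.body} components={blogComponent} />
```

Because every slice renders rich text through this one map, a rebrand is a single-file change rather than a hunt across templates.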

The blog_home type includes a featured-posts relationship field, giving editorial teams control over surfaced content without requiring a new deployment. All page content is exposed as fields, so content teams can manage the page fully while the rendering logic stays out of Prismic.

I made author and category separate custom types to support their reuse in other parts of the website. This aids scalability, reduces duplicated content, and keeps the product fully CMS-driven.

Rendering Strategy:

I chose a different rendering approach for each page type, because the two use cases have different requirements.

Blog Page: Static Site Generation (SSG)

Blog pages (/blog/[uid]) are fully statically generated at build time via generateStaticParams(). This queries all published blog_page documents from Prismic and pre-renders every article as static HTML before the site is deployed.
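A minimal sketch of that build step, assuming a createClient helper wrapping @prismicio/client (the helper name is mine; getAllByType is the real client method):

```typescript
// app/blog/[uid]/page.tsx (sketch)
import { createClient } from "@/prismicio"; // hypothetical client factory

export async function generateStaticParams() {
  const client = createClient();
  // Fetch every published blog_page document and emit one static route each.
  const pages = await client.getAllByType("blog_page");
  return pages.map((page) => ({ uid: page.uid }));
}
```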

This matters for SEO and crawlability: we serve pre-rendered HTML immediately, with no server execution time and no waiting on data fetching, and once published, article content changes infrequently. The page is also cacheable at the CDN edge, which means consistent response times regardless of origin load.

Blog Home: Incremental Static Regeneration (ISR)

The listing page (/blog) uses revalidate = 500 (~8 minutes). A fully static approach isn't suitable here because the listing is driven by pagination and category-filter search params, and pre-building every combination at build time isn't practical.
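In App Router terms, opting the route into ISR is a single segment-config export:

```typescript
// app/blog/page.tsx (fragment): serve from cache, regenerate in the
// background at most once every 500 seconds.
export const revalidate = 500;
```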

ISR gives flexibility: the page is served from cache and regenerated in the background when the cache expires, so users and crawlers always get a fast response.
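Because the listing is driven by search params, the server component also has to normalise whatever arrives in the URL. A sketch of that normalisation; the helper name and signature are mine, not the project's actual code:

```typescript
// Clamp an untrusted ?page= search param into the valid range for the listing.
function resolvePage(
  rawPage: string | undefined,
  totalResults: number,
  pageSize: number
): number {
  const totalPages = Math.max(1, Math.ceil(totalResults / pageSize));
  const parsed = Number.parseInt(rawPage ?? "1", 10);
  if (Number.isNaN(parsed) || parsed < 1) return 1; // bad input falls back to page 1
  return Math.min(parsed, totalPages); // out-of-range requests land on the last page
}
```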

On-demand revalidation via Prismic webhook

Time-based revalidation is a safety net, though, not the primary update mechanism. The Prismic client in production uses force-cache with a "prismic" cache tag. When a content editor publishes a change, Prismic fires a webhook to /api/revalidate, which calls revalidateTag("prismic") and immediately purges all cached Prismic responses. New content therefore appears within seconds of publishing, not after the next revalidation window.
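A sketch of that route handler. The shared-secret check is an assumption on my part (Prismic webhooks can carry a secret in their payload); the real handler may verify the call differently:

```typescript
// app/api/revalidate/route.ts (sketch)
import { revalidateTag } from "next/cache";
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const body = await request.json();
  // Reject calls that don't carry the secret configured on the Prismic webhook.
  if (body.secret !== process.env.PRISMIC_WEBHOOK_SECRET) {
    return NextResponse.json({ message: "Invalid secret" }, { status: 401 });
  }
  revalidateTag("prismic"); // purge every response cached under the "prismic" tag
  return NextResponse.json({ revalidated: true, now: Date.now() });
}
```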

Infrastructure & CI/CD

This was a deliberate focus area, because I wanted to showcase my ability to manage the product end to end. For a corporate website, the deployment pipeline is as much a product as the site itself. Having managed the Talon.One website and set up the infrastructure it required, I wanted to demonstrate those skills here.

Docker: Multi-Environment Setup

Three environment configurations exist, one each for development, staging, and production.

Staging exists specifically so content teams can preview larger changes (new slice types, redesigned templates) against real CMS data before they touch production.

GitHub Actions

Three parallel jobs run on every pull request: linting, type-checking, and component tests.

This ensures that code-quality regressions, type errors, and failing components never reach production.

A separate workflow validates Docker builds across all three environments on PRs. Catching a broken Dockerfile before merge is cheaper than debugging it after.

Lighthouse CI runs against the production URL with score thresholds enforced as hard errors for SEO (≥0.6) and accessibility (≥0.6), and as warnings for performance and best practices (≥0.5). Making SEO and accessibility blocking checks signals that regressions in those areas are treated as bugs, not nice-to-haves.

Health Check API

/api/health provides a standardised monitoring contract for the application. It calls client.getRepository(), the lightest Prismic request that validates the full network path without pulling any content data, and returns structured dependency status, Prismic round-trip latency, and process uptime.

The endpoint returns 503 on failure rather than a 200 with an error body. This is deliberate as load balancers, uptime monitors, and tools like Datadog act on HTTP status codes, not response bodies.
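The status mapping can be sketched as a pure function: any failed dependency flips the HTTP status to 503 so monitors can act on the code alone. Type and field names here are illustrative, not the project's exact response contract:

```typescript
// Hypothetical shape of a single dependency probe result.
type DependencyCheck = { name: string; ok: boolean; latencyMs: number };

// Map probe results to the HTTP status and body /api/health would return.
function healthStatus(deps: DependencyCheck[], uptimeSeconds: number) {
  const healthy = deps.every((d) => d.ok);
  return {
    httpStatus: healthy ? 200 : 503, // monitors key off the status code
    body: {
      status: healthy ? "ok" : "degraded",
      dependencies: deps,
      uptimeSeconds,
    },
  };
}
```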

For this project I set up a basic UptimeRobot dashboard to monitor the health of the site.

DASHBOARD↗

Frontend Architecture

Component Rendering Model

The blog home avoids the common anti-pattern of placing 'use client' at the page level and fetching Prismic data inside useEffect. Server components handle data fetching; client components are scoped to interactions that genuinely require them:

/app/blog/page.tsx (Server)
 └── /components
      ├── CategorySelectClient.tsx (Client)  <-- 'use client'
      ├── BlogList.tsx (Server)
      │    └── Card.tsx (Server)
      └── BlogPaginationClient.tsx (Client)  <-- 'use client'

 

Component Architecture

The component library is built in two layers, with a clear boundary between UI components and feature logic.

This separation of concerns matters for a few reasons and is crucial in the Next.js client/server model.

The base components themselves are thin wrappers over Ark UI primitives. Ark UI provides out-of-the-box accessibility behaviour such as ARIA attributes, keyboard navigation, and focus management. This means the accessible behaviour is correct by default and doesn't need to be re-implemented or re-tested for each use case.

Design System

Tailwind utility classes are used through design tokens only, not arbitrary values like max-w-[1200px] or bg-[#B9FF3F]. Hardcoded values in class names make brand updates expensive because there is no single source of truth; tokens centralise that. I therefore avoided arbitrary values wherever possible, which, from reviewing Moss's website, appears to be common practice there as well.
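A sketch of how such tokens can be centralised in the Tailwind config. The token names are my own illustrations; the values shown are the arbitrary values mentioned above:

```typescript
// tailwind.config.ts (fragment, illustrative token names)
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./app/**/*.{ts,tsx}", "./components/**/*.{ts,tsx}"],
  theme: {
    extend: {
      // bg-brand instead of bg-[#B9FF3F]: one place to change the brand colour
      colors: { brand: "#B9FF3F" },
      // max-w-content instead of max-w-[1200px]
      maxWidth: { content: "1200px" },
    },
  },
};

export default config;
```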

Accessibility

Ark UI was chosen as the headless component library. It ships with correct ARIA attributes and keyboard interaction patterns by default, reducing the surface area for accessibility regressions. Lighthouse accessibility is enforced in CI as a hard threshold.

Technical SEO

Testing

I used Jest and React Testing Library to write unit tests for both base components and feature wrapper components. I didn't aim for full coverage; I tested a selection of components to demonstrate the key patterns.
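An illustrative test in that style; the Button component and its props are assumptions, not the project's actual API. The idea is to assert accessible behaviour (role, keyboard operability) rather than implementation details:

```tsx
// Button.test.tsx (sketch); component and props are hypothetical.
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { Button } from "@/components/ui/Button";

test("is reachable by role and operable via keyboard", async () => {
  const user = userEvent.setup();
  const onClick = jest.fn();
  render(<Button onClick={onClick}>Save</Button>);

  // Query by accessible role, not by class name or test id.
  expect(screen.getByRole("button", { name: "Save" })).toBeInTheDocument();

  // Tab to the button and activate it with the keyboard.
  await user.tab();
  await user.keyboard("{Enter}");
  expect(onClick).toHaveBeenCalledTimes(1);
});
```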

I used Claude Code to help define test cases, implement the tests, and work through some type issues.

 

Production Considerations

Observability is the most immediate gap. The health check provides basic uptime monitoring, but there is no APM or error tracking yet. The next step would be Datadog for server-side performance metrics and Sentry for error capture.

Error boundaries at the slice level would allow a single broken component to fail without taking down the whole page.
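A sketch of what a slice-level boundary could look like, assuming the standard React class-component error boundary API (the component name is mine):

```tsx
"use client"; // error boundaries must be client components in the App Router

import { Component, type ReactNode } from "react";

export class SliceErrorBoundary extends Component<
  { children: ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  render() {
    // Swallow the failed slice; the rest of the page still renders.
    return this.state.hasError ? null : this.props.children;
  }
}
```

Wrapping each rendered slice in this boundary means one broken slice renders nothing instead of crashing the whole page.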

What I'd Prioritise Next

Design Implementation

I used the Figma file as a reference for layout, spacing, and component composition, but not as a pixel-perfect specification. The brief explicitly deprioritised visual accuracy in favour of clean structure and clear decisions, and I made the same trade-off deliberately, so this is not a one-to-one copy. I could of course have spent time on Moss's website extracting Tailwind classes, or used Swiper.js as the carousel library.


I focused on demonstrating end-to-end ownership of a production website: component architecture, CMS integration, rendering strategy, SEO, infrastructure, and CI/CD.

Tailwind was used to get close to the design intent — responsive layout, spacing system, brand colour — without losing time to fine-grained visual polish that would have come at the expense of the things that matter more for this role.


What I'd do on a real project is work more closely with design during implementation, use a design token system agreed with the design team upfront, and treat visual QA as a proper sign-off gate before shipping. For a challenge with a time constraint, I made a deliberate call about where I should prioritise.


A Final Note on Submission

The challenge was submitted on Friday, but I continued working on it on Saturday because I felt it wasn't up to my own standards and there was more I wanted to demonstrate, particularly around infrastructure, CI/CD, and production readiness, that I hadn't had time to show properly during the working week. If that's an issue, however, we can always roll back to that SHA and go from there.