Working Locally with HTTPS and All Your Services

Working as a team on a professional setup that involves many “micro/macro” services is challenging and can lead to invisible and unexpected errors if not set up well. Time to market, first-time success, and a short feedback loop are important topics for all your teams. This article is about setting up your project for a successful production workflow.

When it comes to implementing a web application, best practices enable you and your teams to work efficiently.

They may at first appear to be useless complexity, but they are definitely key to scaling your team and your project.

The Twelve-Factor App is a public and well-known methodology that gives you 12 key principles to follow to improve your production workflow.

In this blog post, we will not review all of the factors; we will focus on the most frequently disregarded one: development/production parity:

“Keep development, staging, and production as similar as possible.”

One of the underlying goals of this topic is to reduce the feedback loop in terms of errors/bugs. We want our workflow to fail as quickly as possible. This means we don’t want to wait for the CI/CD and definitely not for a bug report in production.

To satisfy this requirement, you need to run the project locally (and run the tests locally).

In the Jamstack ecosystem, using Node and NPM/YARN, it’s usually really easy to start an app with “npm start” or “yarn start.” All the different frameworks will give you a procedure to start your app in no time. This is also true for other ecosystems like PHP, for example.

And 95% of the time, you get your project running on http://localhost:3000. In a microservice/decoupled architecture, you might have different apps/services: some that you own, develop, and deploy, and others that you simply use.

With Crystallize, you usually have a Frontend and a Service API that you both own. You might also have Mailcatcher to catch outgoing emails, RabbitMQ as a message broker to enable asynchronous tasks, and maybe a database for the business logic of your Service API, all of which you run and manage through Docker.

You would run the two apps with NPM (pro tip: Volta.sh greatly simplifies managing your local Node and Yarn versions per folder, and therefore per project), and you'll be ready to develop your application against the same services that exist in STAGING and PRODUCTION, replicated locally thanks to Docker.
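Those third-party services can be described once in a Docker Compose file and started with a single command. Here is a minimal sketch; the images, versions, and ports are illustrative assumptions, not a prescribed setup:

```yaml
version: "3.8"
services:
  database:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"    # AMQP
      - "15672:15672"  # management UI

  mailcatcher:
    image: sj26/mailcatcher
    ports:
      - "1025:1025"    # SMTP, where your app sends emails
      - "1080:1080"    # web UI, where you read them
```

A `docker compose up -d` then brings up the whole supporting cast while the two apps themselves keep running via NPM.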

“That’s not enough!”

Your two applications do not yet respect the parity: you will miss many constraints locally, and you will end up with bugs (or useless features) surfacing in the CI/CD, in STAGING, or, even worse, in PRODUCTION.

Same Domain

First, in the situation mentioned above, both your apps share the same domain: localhost, which most likely is not what will happen in PRODUCTION with a decoupled architecture.

Cookie Sharing

Sharing a domain means sharing the cookies, so you might unknowingly build a full authentication system that relies on it locally. This won't work in production, where the domains differ. You should reproduce the production situation locally.
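To make the difference concrete, here is a sketch of the same Set-Cookie response in both situations (the domain names are illustrative):

```http
# Locally, both apps answer on localhost, so this cookie set by the
# Service API is also sent along with requests to the Frontend:
Set-Cookie: session=abc123; Path=/; HttpOnly

# In production, a cookie scoped to the Service API's domain is NOT sent
# to the Frontend's domain; sharing it requires an explicit common
# parent domain (and won't work at all across unrelated domains):
Set-Cookie: session=abc123; Domain=.example.com; Path=/; HttpOnly; Secure
```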

Same Origin

The same origin means you won't hit any Cross-Origin Resource Sharing (CORS) errors locally: your browser will let the Frontend make any call it wants to the Service API.

Once again, in Production, you will have constraints, and you should be able to anticipate them locally.
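In production, the Service API has to allow the Frontend's origin explicitly. As an illustration, its responses would need to carry headers along these lines (the origin shown reuses this article's example domain):

```http
Access-Control-Allow-Origin: https://frontend.app.crystal
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization
```

With distinct local domains, a missing or wrong header fails in your browser immediately instead of in STAGING.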

HTTP Only

Second, you run this in HTTP only.

A common belief is that your applications shouldn't care (or even know) whether they're served via HTTP or HTTPS since that's essentially an infrastructure concern. 

This is true for server-only applications (like the Service API), but it is not true for the Frontend.

It's very likely that you will have HTTPS in PRODUCTION. HTTPS is good for security and SEO, but that's not all!

Some browser features are only available over HTTPS:

  • HTTP/2
  • Service Workers
  • Notifications API
  • Secure Cookie flag

While common browsers allow those features over HTTP when using localhost, with custom domains you will need HTTPS to experiment with them.

“So to respect the dev/prod parity you should have specific domains over HTTPS locally.”

Solution

Custom Domains

For the custom domains, it is actually really easy: just add an entry to your /etc/hosts file:

127.0.0.1 service-api.app.crystal frontend.app.crystal

HTTPS Certificates

For HTTPS, it is a bit more complicated (but it can be automated; stay with us). You need valid certificates, and for certificates to be valid, they must be signed by a Certificate Authority. But you don't want to pay for a signed certificate for a local development domain (nor play with Let's Encrypt or similar).

The solution is to set up your local machine as a Certificate Authority and register it with your browser(s). And of course, there is a tool for that: mkcert.

mkcert -install

Done! Now we need to generate certificates for the domains that we want:

mkcert service-api.app.crystal frontend.app.crystal

That's it! Note that mkcert names the generated files after the first domain (e.g., ./service-api.app.crystal+1.pem and ./service-api.app.crystal+1-key.pem); rename them, or adjust the tls directive accordingly in the proxy configuration below.

HTTPS Proxy

Now that we have valid certificates, we need something to terminate HTTPS and proxy to our apps. As of April 2022, Caddy Server is the simplest option for that.

The configuration file is self-explanatory:

service-api.app.crystal {
    tls domains.pem key.pem

    reverse_proxy 127.0.0.1:3001 {
        header_up Host                {host}
        header_up X-Real-IP           {remote}
        header_up X-Forwarded-Host    {host}
        header_up X-Forwarded-Server  {host}
        header_up X-Forwarded-Port    {port}
        header_up X-Forwarded-For     {remote}
        header_up X-Forwarded-Scheme  {scheme}
    }
}

frontend.app.crystal {
    tls domains.pem key.pem

    # Useful for Remix Run and/or if you have LiveReload enabled and/or if you just have websockets
    # @websockets {
    #     header Connection *Upgrade*
    #     header Upgrade websocket
    # }
    # reverse_proxy @websockets localhost:3002

    reverse_proxy 127.0.0.1:3000 {
        header_up Host                {host}
        header_up X-Real-IP           {remote}
        header_up X-Forwarded-Host    {host}
        header_up X-Forwarded-Server  {host}
        header_up X-Forwarded-Port    {port}
        header_up X-Forwarded-For     {remote}
        header_up X-Forwarded-Scheme  {scheme}
    }
}

And then you can run the Caddy Server:

caddy start --config Caddyfile

Then you have both applications served locally over HTTPS on their own domains, just like in PRODUCTION.

Automations

To go further, we recommend two more steps:

  • Automate this install (via Makefile, for instance)
  • Enforce that automation by testing it in your CI.

Why? Because first-time success for newcomers on a project is crucial. A project must be easy for your teams to install on a workstation in less than 30 minutes.
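As a sketch, such a Makefile target could look like the following; the domain names and folder layout are taken from this article's examples, so adapt them to your project:

```makefile
.PHONY: install
install:
	mkcert -install
	mkcert service-api.app.crystal frontend.app.crystal
	grep -q "app.crystal" /etc/hosts || \
		echo "127.0.0.1 service-api.app.crystal frontend.app.crystal" | sudo tee -a /etc/hosts
	cd frontend && npm install
	cd service-api && npm install
```

A newcomer then runs `make install` once, and the CI runs the very same target, which keeps the procedure from silently rotting.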

Local development must be comfortable, and it should include both a “development” mode and a “production” mode. It's imperative to give developers a way to implement features with all the comfort of the development mode:

  • Better debugging
  • Profiling
  • More detailed error messages
  • No cache

It is also critical for them to be able to test locally in production mode with caching enabled (HTTP cache included). You will probably have an HTTP cache, and you will probably want to test cache expiration and purging, so you need to replicate it locally.

At Crystallize, we provide a Starter Kit (for Node) that includes these automations.

To run this automation (using the Kit) in your CI, you could have something similar to this:

apk add caddy make curl
wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
mv mkcert-v1.4.3-linux-amd64 /usr/bin/mkcert && chmod +x /usr/bin/mkcert
echo "127.0.0.1 frontend.app.crystal service-api.app.crystal" >> /etc/hosts
make install
(cd frontend && npm start) &
(cd service-api && npm start) &
caddy start --config provisioning/dev/Caddyfile
sleep 10 # give both apps time to boot before probing them
curl -s -I https://frontend.app.crystal | grep "HTTP/2 200"
curl -s -I https://service-api.app.crystal | grep "HTTP/2 200"

Wrapping Up

So now you can run your project, and all the services that you can replicate using Docker, locally on your computer in production-like conditions. Moreover, you can easily share the project with your coworkers so they can install it and contribute, since the installation is enforced in your CI.

The next step could be a Docker Compose file that brings in Varnish for HTTP caching. You could then reproduce the CDN behavior locally, implement and test your features, and ensure that the HTTP cache is tagged, purged, and expired when needed (learn more about Event-Driven HTTP Caching).
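As an illustration, a Compose fragment putting Varnish in front of the Frontend might look like this; the image, ports, and VCL path are assumptions to adapt:

```yaml
services:
  varnish:
    image: varnish:7
    ports:
      - "8081:80"          # hit this port to go through the cache
    volumes:
      - ./provisioning/dev/default.vcl:/etc/varnish/default.vcl:ro
```

Pointing the Caddy reverse proxy at Varnish instead of the app then lets you exercise cache hits, purges, and expiration locally.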

Another step could be to add Mailcatcher (or similar) to your Docker network and configure your application (via environment variables) to send emails through it, so you can build beautiful emails and test them locally (without using your Gmail account or testing only in production).

A more advanced step would be to add RabbitMQ (or any message broker) to enable horizontal scaling, workers, and queues! Always make sure your application configures itself via environment variables: if in PRODUCTION you're not using RabbitMQ but AWS SQS, it will adapt (you can also use SQS from your local machine).
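As a sketch of that convention, the local environment could export DSN-style variables that the application reads instead of hard-coding endpoints; the variable names here are hypothetical, not a Crystallize API:

```shell
# Local values, pointing at the Docker services from earlier:
export QUEUE_DSN="amqp://guest:guest@localhost:5672"  # RabbitMQ
export MAILER_DSN="smtp://localhost:1025"             # Mailcatcher

# In PRODUCTION, the same variables would point at SQS and a real SMTP
# relay, and the application adapts without any code change.
echo "Queue: $QUEUE_DSN"
echo "Mailer: $MAILER_DSN"
```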