Providing secure, authenticated access to an internal service running in ECS
Gone are the days when every employee sits in an office cubicle from 9AM to 5:30PM, Monday to Friday. A physical office with the blinking lights of a VPN appliance or a whining server in the corner is no longer a given, so the ‘traditional’ approach of whitelisting company IPs and having your colleagues VPN in to the corporate network just isn’t an option for some firms. The flip side is that you’re probably still running several internal applications that need to be accessible to some people but aren’t suitable for exposing publicly on the internet.
We can combine two Cloudflare offerings to work around this problem: first we tunnel an internal service, which may not even have inbound internet access, to Cloudflare’s network, and then we authenticate the users who try to access it against our corporate identity provider. In this example, we’ll be running an instance of Grafana inside Amazon Web Services’ Elastic Container Service (ECS) and using G Suite to provide the authentication.
Step 1 - establishing the tunnel
Although Grafana has its own authentication, we may not want the server it’s running on to be directly accessible over the internet, so we’ll run an ECS service in a private subnet of our VPC. The subnet will have outbound internet access through a NAT gateway in a public subnet. You can find the task definition and Dockerfiles in the article repository.
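The task definition simply pairs the Grafana container with the cloudflared sidecar described below. As a rough sketch of what that might look like (this is not the definition from the article repository; the account ID, region, role, image URIs, sizing and parameter name are all placeholders), registering it could look something like this:

#!/usr/bin/env bash
# Illustrative sketch only – not the task definition from the article repo.
cat > grafana-task-def.json <<'EOF'
{
  "family": "grafana-internal",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/grafanaTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "grafana",
      "image": "grafana/grafana:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }]
    },
    {
      "name": "cloudflared",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/cloudflared-sidecar:latest",
      "essential": true,
      "secrets": [
        {
          "name": "CERT_PEM",
          "valueFrom": "arn:aws:ssm:eu-west-1:123456789012:parameter/grafana/cloudflared-cert-pem"
        }
      ]
    }
  ]
}
EOF

# Register the task definition. The execution role needs permission to read
# the CERT_PEM parameter, which is explained in the next section.
aws ecs register-task-definition --cli-input-json file://grafana-task-def.json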
The first tool in our belt is Argo Tunnel, a product that allows you to route traffic from Cloudflare’s network to your service without exposing the service directly to the internet. We’ll run the cloudflared daemon as a sidecar container to our service; the Dockerfile and config file below show the setup we’ll use to receive traffic from Cloudflare and direct it to the main Grafana container.
We’re going to use a really short configuration file for cloudflared and then run it with an entrypoint script that will pull down the cert.pem file from a Systems Manager Parameter Store parameter. The cert.pem contains a certificate, private key and access token that are used to authenticate and communicate with Cloudflare. It’s generated by running cloudflared login.
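Concretely, generating the certificate and storing it might look something like the following; the parameter name here is an arbitrary example, not one mandated by the article:

# Authenticate with Cloudflare; this opens a browser window and writes
# ~/.cloudflared/cert.pem for the zone you select.
cloudflared login

# Store the certificate as an encrypted parameter so the ECS task can read it.
# (Add --tier Advanced if the file exceeds the 4 KB standard parameter limit.)
aws ssm put-parameter \
  --name /grafana/cloudflared-cert-pem \
  --type SecureString \
  --value "$(cat ~/.cloudflared/cert.pem)"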
Cloudflared config (/etc/cloudflared/config.yml)
We’re sending traffic to another container running in our Fargate-backed service, so we point it at localhost (with the awsvpc network mode used by Fargate, containers in the same task share a network namespace and can reach each other’s ports on localhost). Users will visit https://grafana.yourcorp.com to access the internal service.
url: http://localhost:3000
hostname: grafana.yourcorp.com
Entrypoint
After creating the cert.pem file from an env var that’s sourced from the Systems Manager Parameter Store, we repeatedly try to establish a connection with the other container that’s running our service. The example below uses the /login endpoint but you can customise it to fit your service. If we start the tunnel daemon before the origin is ready, it’ll error out and stop, so we try for up to a configurable period (60 seconds in this case) to give it time to start.
#!/usr/bin/env bash
set -ueo pipefail

# Recreate cert.pem from the CERT_PEM env var injected from Parameter Store.
echo -e "$CERT_PEM" > /etc/cloudflared/cert.pem

ORIGIN=http://localhost:3000/login

# Allow failures while we poll the origin; we handle them ourselves below.
set +e
for i in {1..60}
do
  # Discard the response body; we only care about the exit code.
  wget -q -O /dev/null "$ORIGIN"
  if [ $? -ne 0 ]; then
    echo "Attempt to connect to ${ORIGIN} failed."
    sleep 1
  else
    echo "Connected to origin ${ORIGIN} successfully."
    break
  fi
done
set -e

# Hand over to cloudflared (it reads /etc/cloudflared/config.yml) so it
# receives container stop signals directly.
exec cloudflared tunnel
Dockerfile
The Dockerfile installs the cloudflared daemon and then copies in our config file and entrypoint script.
FROM debian:stretch-slim

# Install wget, then download and install the cloudflared daemon.
RUN apt-get update && \
    apt-get dist-upgrade --yes && \
    apt-get install wget --yes && \
    wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb && \
    dpkg -i cloudflared-stable-linux-amd64.deb && \
    rm cloudflared-stable-linux-amd64.deb

# Copy in the entrypoint script and tunnel configuration,
# and make sure the entrypoint is executable.
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
COPY config.yml /etc/cloudflared/config.yml
RUN chmod +x /usr/local/bin/entrypoint.sh

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
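To make the sidecar image available to the Fargate task, you’d typically push it to a registry such as ECR. A quick sketch with a recent AWS CLI, using placeholder account, region and repository names:

# Build the sidecar image from the directory containing the Dockerfile.
docker build -t cloudflared-sidecar .

# Authenticate Docker with ECR, then tag and push (placeholders throughout).
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker tag cloudflared-sidecar:latest \
  123456789012.dkr.ecr.eu-west-1.amazonaws.com/cloudflared-sidecar:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/cloudflared-sidecar:latest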
With the ECS service started, cloudflared will start and connect to Cloudflare.
Step 2 - providing authenticated and secure access
Although the Argo Tunnel would let us provide public access to our internal service, we want to add a layer of authentication so that it’s restricted to our organisation’s users. We can start by adding a new Access application in the Cloudflare console and then setting up a rule to limit access to specific users (if required).
With the Argo Tunnel and Access application connected, we can now visit the subdomain we chose in the Access setup, where we’ll see the familiar login page of our identity / SSO provider, in this case Google. Once authenticated, we’re allowed through to our Grafana container and can build the beautiful dashboards that our service team deserves.
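Browser access is handled by that redirect automatically; if you also want to reach the service from scripts or curl, cloudflared can fetch an Access token on your behalf. A rough sketch, using Grafana’s /api/health endpoint purely as an example request:

# One-off, browser-based login for this Access-protected application.
cloudflared access login https://grafana.yourcorp.com

# Pass the resulting token on requests made from scripts or curl.
curl -H "cf-access-token: $(cloudflared access token --app=https://grafana.yourcorp.com)" \
  https://grafana.yourcorp.com/api/health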
What if I want to run multiple copies of the container?
If you’re running an HA service or need multiple copies of the service running, you can set up load balancing in Cloudflare itself, or alternatively put a load balancer in front of your ECS service and have a single cloudflared instance running that points at that LB.