Self-hosting with and without ngrok
By Sam Rose
October 27, 2025
You've got a web app idea, you've been coding it up and you finally feel ready to get it online and get people using it. This is a step-by-step guide for a setup that'll get your idea online for a flat $6.50/month + what your domain costs, and scale to thousands of users.
I'm going to assume you know how to code and use the command line, but haven't self-hosted anything before.
I'm going to use a cheap VPS from OVHcloud, a domain name from Namecheap, Caddy as a reverse proxy, and systemd to keep everything running in the background.
Yes, this is a blog post on the ngrok.com blog and yes, I could use ngrok for some of these jobs. I want to show how you can build up the moving parts yourself first, if you want to. If you'd prefer to use ngrok instead, there will be a section at the end explaining how.
Your web app idea is brilliant, there's no doubt in my mind about that. However, I don't know what it is, so let's keep things simple. I went searching through ngrok's sample repositories and I found this gem: a Go HTTP API that returns facts about desert tortoises.
$ git clone git@github.com:ngrok-samples/example-api-go.git
$ cd example-api-go
$ go build main.go
$ ./main --port 8080
API is running on http://localhost:8080
If you don't have the Go language tools installed you can follow the official installation instructions to get them.
$ curl -X GET http://localhost:8080/random
{
"fact": "Desert tortoises can distinguish between different colors and shapes.",
"id": "DT039"
}
This is delightful. Let's get it onto the Internet.
While it's technically possible to use your laptop or a Raspberry Pi running in your closet to host this API, there are good reasons not to. Your ISP may disallow web hosting in their terms of service, for example. If you aren't careful, you may also create a way for hackers to get into your home network by accident.
Let's pick a safer option. I'm going to run the desert tortoise API on a Virtual Private Server (VPS).
There are many companies that sell VPSs. These companies have computers running in datacenters all over the world, and they divide them into smaller "virtual machines." A 64 core machine with 128GB of RAM might get split into 16 virtual machines, each with 4 cores and 8GB of RAM. You do end up sharing with other people, but you get lower prices in return. The virtual machines are isolated from each other, so the other tenants on the machine can't access your files.

How these servers tend to look inside datacenters. Endless 19-inch racks of computers. These are Dell PowerEdge R510s owned by the Wikimedia Foundation.
I'm choosing to use OVHCloud for this post. At time of writing they offer a 4 core, 8GB RAM VPS for £4.80/mo (~$6.50), and they don't charge for network bandwidth. Some providers will give you your first few terabytes of bandwidth per month for free and then charge you after that, which can catch you out if your site goes viral while you're asleep.
I chose OVHCloud's VPS-1 plan, and I went with one of their France regions because it's close to me. You'll want to start off in a region you think is going to be closest to your users, because latency is higher the further away they are. Higher latency, longer page loads.

I stuck mostly to the defaults; the only changes I made were going with the "no commitment" option and selecting France. The default operating system OVHCloud installs on their VPSs is Ubuntu, and this is a solid choice for any web application you'll want to deploy.
After checking out, I had to wait a little while for them to set my VPS up. This took about 2-3 minutes. When it was done, I got this email:

The important details here are that my VPS can be accessed at vps-4f0acab8.vps.ovh.net, and that it has an IPv4 address of 51.254.200.153. I'll need these later.
See that link next to "Your VPS name is:" that looks really clickable? When I click on it, my browser looks like this.

This happens because there's no website being hosted on the VPS yet. What I'm supposed to do with that link is log in via SSH.
$ ssh ubuntu@vps-4f0acab8.vps.ovh.net
The desert tortoise API uses Go, which means I can compile it into an executable binary file and run that on my VPS.
$ go build main.go
$ scp ./main ubuntu@vps-4f0acab8.vps.ovh.net:~/main
scp stands for "secure copy"; it's a version of the cp command that uses SSH to move files to or from remote computers. The command we're running here copies ./main from my laptop over to ~/main on my VPS.
I can now see the main file on my VPS:
ubuntu@vps-4f0acab8:~$ ls
main
From now on, any prompt that starts with ubuntu@vps-4f0acab8:~$ is going to represent the VPS, and ones that start with just $ will be my laptop.
Great, now I can run it!
ubuntu@vps-4f0acab8:~$ ./main
-bash: ./main: cannot execute binary file: Exec format error
Oh…
When I compiled main.go on my laptop, the Go compiler read the code and compiled it into a binary file that could execute on my laptop. I'm using an Apple MacBook Pro with an M3 chip. The M-series chips in Apple laptops are what's called “ARM processors.” The VPS I've bought has an Intel processor, which can't run programs compiled for ARM.
Additionally, Apple devices use a format for their executable files called Mach-O. These won't run on Linux, because Linux uses a format called ELF. The details of these formats aren't important; all that matters is that we have a Mach-O binary using ARM instructions, but we need an ELF binary with Intel instructions inside it.
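If you're ever unsure what you need to target, you can ask the machines themselves: uname -m prints the processor architecture, and the file command (which I use below) describes an executable. On my VPS it looks like this:

ubuntu@vps-4f0acab8:~$ uname -m
x86_64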
I'll recompile the desert tortoise API using some flags that tell it to target an Intel processor running Linux.
$ GOOS=linux GOARCH=amd64 go build main.go
Getting no output from this command feels disconcerting, so I'm going to check that it did the right thing using the file command.
$ go build main.go
$ file main
main: Mach-O 64-bit executable arm64
$ GOOS=linux GOARCH=amd64 go build main.go
$ file main
main: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=f78b7b31ebbd8c47e97a3d974a0e2f2557f22e3d, with debug_info, not stripped
The important parts in there are Mach-O / arm64, and ELF / x86-64. We haven't mentioned x86-64 in this post yet, but it and amd64 (from the GOARCH flag above) are both ways of referring to the instruction set that Intel processors use. arm64 is, as you might expect, what ARM calls theirs. Yes, this is all deeply confusing. No, you don't have to remember it.
If you're worried these companies are humourless monoliths that can't help but give things weird inscrutable names, ARM has a smaller instruction set that they use to produce smaller binaries and they called it “thumb.”
Now let's get the fixed binary onto the VPS and try running it again:
$ scp main ubuntu@vps-4f0acab8.vps.ovh.net:~/main
$ ssh ubuntu@vps-4f0acab8.vps.ovh.net
ubuntu@vps-4f0acab8:~$ ./main
Error loading tortoise facts: Error reading file: open data/facts.json: no such file or directory
API is running on http://localhost:5000
Whoops, I also need to send up the file that the tortoise API reads its facts from, which is in data/facts.json:
$ scp -r data ubuntu@vps-4f0acab8.vps.ovh.net:~/data
The -r flag to scp stands for "recursive" and is required when you want to copy a directory.
ubuntu@vps-4f0acab8:~$ ./main
API is running on http://localhost:5000
And now I can access http://vps-4f0acab8.vps.ovh.net:5000/random in my browser to get a random desert tortoise fact.

However, there are 3 problems we need to address: the address is an ugly, hard-to-remember hostname rather than a proper domain name; the browser flags the site as "not secure" and we have to tack :5000 onto the URL; and the API only runs for as long as my SSH session stays open.
I'm going to solve these problems in order.
When I visit a website in my browser, be it ngrok.com or vps-4f0acab8.vps.ovh.net, my laptop needs to convert these confusing human words into nice, sensible numbers. The underlying network protocol that gets my data from A to B has no understanding of ngrok.com. It needs to see something like 3.125.209.94.
ngrok.com is a "domain name" and 3.125.209.94 is an "IP address." To convert between the two, our computers talk to a Domain Name System (DNS) server. If I want deserttortoisefacts.club to "resolve" to my VPS's IP address of 51.254.200.153, I need to buy the domain name from a "domain registrar."
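If you'd like to see that resolution step happen from code rather than a browser, Go's standard library can do the same lookup. This little program is purely illustrative and isn't part of the tortoise API:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Ask the system's DNS resolver which IP addresses ngrok.com
	// points to, the same way a browser does before connecting.
	ips, err := net.LookupIP("ngrok.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, ip := range ips {
		fmt.Println(ip)
	}
}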
Similar to VPS providers, there are a lot of domain registrars out there. I tend to use https://namecheap.com. You can use the search box on their home page to start looking for the perfect domain for your app. Lucky for me, the perfect domain for my desert tortoise API was available.

I can add this to my cart and check out.
The price of domain names is not fixed. You will find that shorter domain names are more expensive. A lot of domain names are also taken, so it can be a pain trying to find one you like. Common tactics involve removing vowels, e.g. flickr.com, or putting a short word in front like getsentry.com.
Domain registrars tend to have lots of extra services they provide, like email and web hosting, but I just need the domain name so I went through the checkout process with the defaults and nothing extra.

If you've chosen a registrar other than Namecheap, something to keep an eye out for is "domain privacy." In some places it might be called "WHOIS privacy." This matters because the registrar will ask you for your name and address, and if you don't have WHOIS privacy that name and address will become publicly available. For an example of how this looks, here's my personal domain.
The last thing I need to do in my domain registrar is configure deserttortoisefacts.club to point to my VPS's IP address, 51.254.200.153.
In my dashboard I click manage next to the domain.

Then I click “Advanced DNS”:

I click the delete icon next to the existing entries:

Then I click add new record and enter the following:

Then I click save all changes and I'm done!
A lot happened in the above section, but ultimately the outcome we want is for deserttortoisefacts.club to “resolve” to the IP address 51.254.200.153.
We can verify that it does in the command line, using a tool called “dig.”
$ dig +short deserttortoisefacts.club
At first, this command will likely return nothing at all. DNS changes take time to "propagate" throughout the domain name system. You should see output like this within 10 minutes or so.
$ dig +short deserttortoisefacts.club
51.254.200.153
Now visiting http://deserttortoisefacts.club:5000/random in my browser gives me a random desert tortoise fact!
In day-to-day web browsing we almost never see this colon and then a number syntax in our URLs. As a developer, you're likely used to it from hitting http://localhost:3000 and whatnot. You may even know that this number is called a “port number.”
By default, browsers will send an http:// request to port 80, and an https:// request to port 443. The port number is omitted in the URL because these are well-known ports. So to remove the need for :5000 in the URL, all we need to do is use port 80 for our API.
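In other words, these two commands make exactly the same request; the browser is just hiding the :80 from you:

$ curl http://deserttortoisefacts.club/random
$ curl http://deserttortoisefacts.club:80/random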
ubuntu@vps-4f0acab8:~$ ./main --port 80
API is running on http://localhost:80
2025/10/01 09:31:12 listen tcp :80: bind: permission denied
We're not allowed to bind to port 80 unless we are the root user. This is true of all ports below 1024. We could run sudo ./main -port 80 but that would be a really bad idea.
Running the application as root would give it the ability to do anything to my VPS. Bind to port 80, sure, but also: delete any file, run a new service, create and delete users, anything. Yes, the code doesn't do that, but it is easier than you might think to introduce a bug that allows hackers to get access to a CLI prompt on your VPS. Better safe than sorry.
Instead of running my application as root, I'm going to run another application as root and have that application forward requests to mine. This is a common tactic, and is called “reverse proxying.” There are a few good reverse proxies to choose from, but I'm going to go with Caddy.
It might feel like we've just moved the problem to another piece of software, and we have. But you likely aren't going to be changing or updating Caddy as often as your own code, and the Caddy source code is deployed in thousands of places around the world. It has a lot of eyes on it, a singular purpose, and while these things aren't a guarantee of safety, they make me feel better.
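If you're curious what a reverse proxy actually does, here's a toy one written in Go using only the standard library. It's purely illustrative, to show the shape of the job we're about to hand to Caddy; it skips everything that makes Caddy worth using, like HTTPS:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The "upstream" application we want to forward requests to,
	// in our case the tortoise API listening on port 8080.
	upstream, err := url.Parse("http://localhost:8080")
	if err != nil {
		log.Fatal(err)
	}

	// NewSingleHostReverseProxy rewrites each incoming request to
	// target the upstream, forwards it, and copies the response back.
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// Listen on port 80 ourselves and hand every request to the proxy.
	log.Fatal(http.ListenAndServe(":80", proxy))
}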
To install it on my VPS I follow the instructions for Ubuntu on the Caddy website:
ubuntu@vps-4f0acab8:~$ sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
ubuntu@vps-4f0acab8:~$ curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
ubuntu@vps-4f0acab8:~$ curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
ubuntu@vps-4f0acab8:~$ sudo chmod o+r /usr/share/keyrings/caddy-stable-archive-keyring.gpg
ubuntu@vps-4f0acab8:~$ sudo chmod o+r /etc/apt/sources.list.d/caddy-stable.list
ubuntu@vps-4f0acab8:~$ sudo apt update
ubuntu@vps-4f0acab8:~$ sudo apt install caddy
This does a lot for me behind the scenes, including setting up Caddy to run in the background. It will even come back up automatically if I restart the VPS. It does this through systemd.
systemd is what Ubuntu uses for managing long-running programs. You interact with it through a command called systemctl. I can check the status of Caddy like so:
ubuntu@vps-4f0acab8:~$ systemctl status caddy
● caddy.service - Caddy
Loaded: loaded (/usr/lib/systemd/system/caddy.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-10-01 10:14:03 UTC; 1min 33s ago
Invocation: bba73fa840424a1cb1e2bd6cb464e6e7
Docs: https://caddyserver.com/docs/
Main PID: 61533 (caddy)
Tasks: 9 (limit: 9250)
Memory: 11.1M (peak: 11.6M)
CPU: 366ms
CGroup: /system.slice/caddy.service
└─61533 /usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
It's not important to understand all of that output, but I can see that it is active (running) and at the very end it tells me the command that was run was /usr/bin/caddy run --environ --config /etc/caddy/Caddyfile. It's not shown above, but the installation also created a user called caddy which has restricted access to my VPS and is being used to run the Caddy server. This is a nice extra layer of defence. If an attacker did gain access, they would do so as the user caddy, who cannot read our files.
Wait, I thought you said you needed to be root to bind to port 80?
I did, and you do. But you can also bind to ports lower than 1024 on Linux systems by using something called “capabilities.” Introduced in Linux 2.2 in 1999, the capabilities system moves away from the all-or-nothing approach of having a root user, and allows you to run a program with more granular permissions. If you open up /usr/lib/systemd/system/caddy.service, you'll see a line that says AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE. This allows Caddy to bind to 80 and 443 without being root.
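We could even have granted that capability to our own binary instead of using a reverse proxy at all. I'm not going to, but for illustration, this is roughly what it would look like (setcap comes from the libcap2-bin package on Ubuntu):

ubuntu@vps-4f0acab8:~$ sudo setcap 'cap_net_bind_service=+ep' ./main
ubuntu@vps-4f0acab8:~$ ./main --port 80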
The systemctl output also tells me that my Caddy configuration lives at /etc/caddy/Caddyfile. By default it contains a bunch of commented out configuration, and I encourage you to go and read it if you're following along, but for the purposes of this guide I'm going to replace it all with 3 lines.
ubuntu@vps-4f0acab8:~$ sudo tee /etc/caddy/Caddyfile > /dev/null <<EOF
deserttortoisefacts.club {
reverse_proxy localhost:8080
}
EOF
This tells Caddy that my domain is deserttortoisefacts.club, and I want to forward requests to localhost:8080. After making this change, I need to restart Caddy for it to load the new configuration, and start my ./main binary on port 8080:
ubuntu@vps-4f0acab8:~$ sudo systemctl restart caddy
ubuntu@vps-4f0acab8:~$ ./main --port 8080
Now visiting https://deserttortoisefacts.club/random in my browser works, and the "not secure" text is gone!

This is because Caddy took care of the process of setting up HTTPS for me. When I changed the configuration to use my domain name, Caddy sent a request to a service called Let's Encrypt to request a “TLS certificate” on our behalf. I can use a tool called journalctl to see the logs from Caddy and verify that this happened.
ubuntu@vps-4f0acab8:~$ journalctl -u caddy | grep "obtaining certificate"
Oct 01 10:37:10 vps-4f0acab8 caddy[61801]: {"level":"info","ts":1759315030.2754638,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"deserttortoisefacts.club"}
This certificate is what contains the information required to perform encryption on data sent to and from the server, and if you don't get one from a reputable source like Let's Encrypt, you won't get rid of the "not secure" message in browsers. There are even some domain suffixes, e.g. .dev, that won't let you serve any requests without a valid TLS certificate.
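If you're curious about the certificate itself, you can inspect it from your laptop. This isn't a required step, just a way to see the issuer and expiry dates of what Caddy and Let's Encrypt set up:

$ echo | openssl s_client -connect deserttortoisefacts.club:443 -servername deserttortoisefacts.club 2>/dev/null | openssl x509 -noout -issuer -dates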
The last thing to address is running the desert tortoise API in the background. We saw in the last section that Caddy uses systemd to run in the background, and I'm going to do exactly the same.
The systemd configuration format has a dizzying array of options but I can get away with only using a few of them.
ubuntu@vps-4f0acab8:~$ sudo tee /etc/systemd/system/deserttortoisefacts-club.service <<EOF
[Unit]
Description=Desert Tortoise Facts API
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=tortoise
Group=tortoise
WorkingDirectory=/home/tortoise
ExecStart=/home/tortoise/main --port 8080
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target
EOF
I'll go through what each section does at a high level:
[Unit] gives my service a name and says it should only run once the network is online. This is relevant when the VPS reboots; without this it might try to bind to port 8080 before there's a network to bind to at all.
[Service] defines what I'm running, in this case the main binary. I'm configuring it to run as a user called tortoise, which I'll create in a moment. It's good practice for all services you run on a VPS to have their own users, to keep them isolated from each other.
[Install] I'll be honest with you, I had to look this one up. It's a habit to add it but I never remember why. In short, without this the service won't start. It adds the service into the dependency graph that systemd creates in order to know what to start and when.
As promised, I create the user tortoise and move the main binary to its home directory.
ubuntu@vps-4f0acab8:~$ sudo useradd -m -d /home/tortoise tortoise
ubuntu@vps-4f0acab8:~$ sudo mv ./main /home/tortoise
Then I tell systemd to start my new service and take a look at the logs:
ubuntu@vps-4f0acab8:~$ sudo systemctl start deserttortoisefacts-club
ubuntu@vps-4f0acab8:~$ journalctl -fu deserttortoisefacts-club
Oct 02 13:43:41 vps-4f0acab8 systemd[1]: Started deserttortoisefacts-club.service - Desert Tortoise Facts API.
Oct 02 13:43:41 vps-4f0acab8 main[81833]: Error loading tortoise facts: Error reading file: open data/facts.json: no such file or directory
Oct 02 13:43:41 vps-4f0acab8 main[81833]: API is running on http://localhost:8080
Whoops, forgot the data file again.
ubuntu@vps-4f0acab8:~$ sudo mv data /home/tortoise
ubuntu@vps-4f0acab8:~$ sudo systemctl restart deserttortoisefacts-club
ubuntu@vps-4f0acab8:~$ journalctl -fu deserttortoisefacts-club
...[snip]...
Oct 02 13:44:29 vps-4f0acab8 systemd[1]: Started deserttortoisefacts-club.service - Desert Tortoise Facts API.
Oct 02 13:44:29 vps-4f0acab8 main[81864]: API is running on http://localhost:8080
One last thing: while I've started the service with systemd, I also need to enable it so that it's brought up automatically when the VPS restarts.
ubuntu@vps-4f0acab8:~$ sudo systemctl enable deserttortoisefacts-club.service
Created symlink '/etc/systemd/system/multi-user.target.wants/deserttortoisefacts-club.service' → '/etc/systemd/system/deserttortoisefacts-club.service'.
And now the service no longer relies on my SSH connection staying open! I can disconnect from the VPS and continue requesting https://deserttortoisefacts.club/random to get those sweet, sweet facts.
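If you ever want to double-check that a service will come back after a reboot, systemctl can tell you:

ubuntu@vps-4f0acab8:~$ systemctl is-enabled deserttortoisefacts-club
enabled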
We've covered self-hosting a simple Go binary on a VPS using tools freely available to us. You could host an app this way and, depending on what your service is doing, likely serve thousands of users without difficulty.
ngrok slots into this setup by replacing Caddy, and with it we get access to features like Traffic Inspector, Endpoint Pooling with global load balancing, and IP Intelligence through Traffic Policy.
Let's set it up and see how it looks.
At time of writing, ngrok does not currently support “apex domains.” deserttortoisefacts.club is an apex domain, which just means that it only has a single dot in it. In order to use ngrok, we need to create a “CNAME” record, which can only be created for domains with 2 or more dots in them.
It is a bit lame, and it's something we are working towards supporting, but we're not there yet. I could have deceived you by starting this post using a domain like api.deserttortoisefacts.club, but I'd rather be honest with you. When we support apex domains, I will come back to update this post.
If you haven't already, you'll need to create an account at https://ngrok.com. After doing so, you'll be dropped into a dashboard that looks like this.

To start the domain setup, I click on the Domains link highlighted in the above screenshot.

Then from here I click on + New Domain.

I enter the domain I'd like to use ngrok with and click Continue.

Now I need to go over to https://namecheap.com and create a DNS record as shown above.

Then I click Check Status back in the ngrok dashboard until I see this:

It can take a few minutes for ngrok to recognise that your domain is set up properly, and this is because DNS changes take time to propagate through the global network of domain servers.
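If you want to check on that propagation yourself, the same dig tool from earlier works for CNAME records; once the change has propagated, this will print the target ngrok asked you to set:

$ dig +short api.deserttortoisefacts.club CNAME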
Once you see Your CNAME Record is set up properly the domain is ready to be used!
Next I need to run ngrok on my VPS and configure it to route traffic from api.deserttortoisefacts.club to port 8080. I follow the setup instructions in the dashboard to install ngrok, then create a configuration file defining my endpoint.
ubuntu@vps-4f0acab8:~$ curl -sSL https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
| sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
&& echo "deb https://ngrok-agent.s3.amazonaws.com bookworm main" \
| sudo tee /etc/apt/sources.list.d/ngrok.list \
&& sudo apt update \
&& sudo apt install ngrok
ubuntu@vps-4f0acab8:~$ sudo mkdir /etc/ngrok
ubuntu@vps-4f0acab8:~$ sudo tee /etc/ngrok/config.yml <<EOF
version: "3"
agent:
  authtoken: $YOUR_AUTHTOKEN
endpoints:
  - name: deserttortoisefacts.club
    url: https://api.deserttortoisefacts.club
    upstream:
      url: 8080
EOF
Filling in $YOUR_AUTHTOKEN with the auth token value I found in the authtoken section of the dashboard.
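Before wiring this up as a service, it's worth asking the agent to validate the file; the ngrok CLI has a config check subcommand for exactly this:

ubuntu@vps-4f0acab8:~$ ngrok config check --config /etc/ngrok/config.yml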
With this config file created I can create a systemd unit with the ngrok CLI:
ubuntu@vps-4f0acab8:~$ sudo ngrok service install --config /etc/ngrok/config.yml
ubuntu@vps-4f0acab8:~$ sudo systemctl start ngrok && sudo systemctl enable ngrok
ubuntu@vps-4f0acab8:~$ journalctl -fu ngrok
Oct 14 13:41:55 vps-4f0acab8 systemd[1]: Started ngrok.service - ngrok secure tunnel client.
...[snip]...
And now I can visit https://api.deserttortoisefacts.club/random and get a fact!
After sending a request to the new endpoint I can take a look at those requests in Traffic Inspector:

Clicking into an individual request gives more detail. Here you can even see the contents of the response, which is not enabled by default. I've enabled it on my account to demonstrate it; you can read up on how to enable it in the docs.

As the demand for tortoise facts grows, I might find myself in a position where I can't service all of that demand from a single VPS. Or maybe the popularity of desert tortoise facts explodes overnight in the US and I want to give these users better latency by having another VPS close to them.
Endpoint Pooling makes this really easy.
The first thing I need to do is update my ngrok configuration to enable Endpoint Pooling on this domain. I do this by modifying the endpoints block in my /etc/ngrok/config.yml file to add pooling_enabled: true.
endpoints:
  - name: deserttortoisefacts.club
    url: https://api.deserttortoisefacts.club
    upstream:
      url: 8080
    pooling_enabled: true
Then restart ngrok:
ubuntu@vps-4f0acab8:~$ sudo systemctl restart ngrok
That change alone doesn't change anything about the endpoint, but it means I can now set up another VPS, install ngrok on it, and configure the exact same endpoint. After doing this, ngrok will load balance requests between the two VPSs, favouring the VPS closest to where the request originated. It will also detect when one of the endpoints is down (if you restart a VPS, for example) and route all requests to the other one automatically.
It doesn't even have to be another VPS: you can run another copy of main on the same VPS to take advantage of the multiple CPU cores the VPS has.
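To make the two-VPS case concrete, here's roughly what the second machine's /etc/ngrok/config.yml would contain, assuming its copy of main also listens on port 8080. It's the same endpoint definition as before, which is exactly the point:

version: "3"
agent:
  authtoken: $YOUR_AUTHTOKEN
endpoints:
  - name: deserttortoisefacts.club
    url: https://api.deserttortoisefacts.club
    upstream:
      url: 8080
    pooling_enabled: true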
You can read more about Global Server Load Balancing and Endpoint Pooling in the ngrok documentation.
Tortoise facts are for people, not robots. I don't want automated scrapers gobbling up all of these facts. One of the cool superpowers ngrok gives you is called “IP Intelligence.”
Attached to every request that comes in to ngrok is a suite of information based on the IP address of the client. If I make a curl https://api.deserttortoisefacts.club/random request from my VPS, here's what I see in the IP Intelligence section of the request in Traffic Inspector:

You can read up on the details of all of the information that's available in the docs, but a couple of highlights from the screenshot above:
Across the internet there are many lists maintained of IP addresses and what they're used for, or who they belong to. For example, AWS publishes all of their IP ranges. ngrok ingests these lists and uses them to create categories of IP addresses. You can read about all of them in the docs.
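Under the hood, a category boils down to a big list of IP ranges in CIDR notation, and checking a client against it is a containment test. Here's the idea in a few lines of Go, using made-up documentation ranges rather than anything from a real list:

package main

import (
	"fmt"
	"net"
)

func main() {
	// One entry from a hypothetical category: a CIDR range.
	_, block, err := net.ParseCIDR("203.0.113.0/24")
	if err != nil {
		panic(err)
	}

	// Is this client's IP inside the range?
	fmt.Println(block.Contains(net.ParseIP("203.0.113.7")))  // true
	fmt.Println(block.Contains(net.ParseIP("198.51.100.7"))) // false
}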
I'm going to use these IP categories to block “bad” IPs. These are IP addresses that are known to come from data centers, cloud hosting providers, and colocation facilities. So they're not going to be residential IP addresses.
I open up /etc/ngrok/config.yml again and add the following:
endpoints:
  - name: deserttortoisefacts.club
    url: https://api.deserttortoisefacts.club
    upstream:
      url: 8080
    pooling_enabled: true
    traffic_policy:
      on_http_request:
        - expressions:
            - "('public.brianhama.bad-asn-list' in conn.client_ip.categories)"
          actions:
            - type: deny
What you're seeing here is Traffic Policy. This is ngrok's configuration language for manipulating traffic. What this block of Traffic Policy is saying is: "if the client IP address is in the bad-asn-list, deny it." Then anything not denied automatically falls through to the upstream URL, in our case http://localhost:8080.
I restart ngrok to pick up this configuration change:
ubuntu@vps-4f0acab8:~$ sudo systemctl restart ngrok
Now the facts are still accessible through my browser from my home, but if I try to curl it from the VPS, I get blocked:
ubuntu@vps-4f0acab8:~$ curl -o - -I https://api.deserttortoisefacts.club/random
HTTP/2 403
date: Tue, 14 Oct 2025 15:07:10 GMT
This is the Traffic Policy in action! There are dozens more rule types and a whole language for crafting expressions; you can read all about it here.
I hope the first half of this guide gives you the confidence to take control of the hosting of your ideas, and I hope the second half of this guide shows you a small slice of the value that ngrok can provide you.
We've got lots of guides on how ngrok can solve a variety of problems for you, videos on how to use ngrok, and docs for everything else.