The PAYNEful Portfolio – Redirecting Domain Names to Mask a Different Domain Path Using NGINX

Note: I wrote this post probably three or four years ago; I’ve been sitting on it for a while because a) I don’t know if it’s all that interesting and b) it might reflect poorly on my server admin skills. In the interest of transparency (and because I’m committed to this one-blog-a-week malarkey) it finally gets published today.


Recently I have been dabbling in some server fun, which means I’ve been engaging in activity that would make proper server admins cry with absolute rage.

Let’s just say I have a website that makes use of RESTful URIs, and we want to be able to add a domain name that points at a specific existing URI. For example…

http://www.main-website.com/restful/uri/long-string-of-parameters

…becomes the following…

http://new-domain.com/long-string-of-parameters

I could foresee a potential headache: in my limited experience of tying domains to Linux servers, you normally have to add a config file to sites-available and symlink it in sites-enabled. That could become an unscalable nightmare, since we were looking at adding domains on a regular basis just to mask existing URLs!
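For the uninitiated, on a Debian-style setup that per-domain workflow looks roughly like this (these are the conventional paths, not anything specific to my box):

# create a config file for the new domain
sudo nano /etc/nginx/sites-available/new-domain.com

# enable it by symlinking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/new-domain.com /etc/nginx/sites-enabled/

# reload NGINX to pick up the new site
sudo service nginx reload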

Poking around NGINX and the sites-available folder, I found a file called “catch-all” that contained the following…

server {
     return 404;
}

Given that the only other files were for the main site’s config, it seemed obvious that this was a file to deal with “any other traffic”, which was interesting.

In the NGINX config I also noted that there were several “include” statements like the following:

include config-folder/domain-name-here.com/directory/*;

This include seemed to be saying “include any config values from files in this directory”.

Putting this new information together, I wondered if I could pass in a config file with directives for the domains we were looking to add. If I could, this should be easy enough to automate as it would just be a case of generating a new .conf file every time we add a domain!

First, in the domain registrar panel, I pointed a test domain that we wanted to add to the system at the remote server’s IP address. Accessing that domain then served the default NGINX 404 page, which made sense.

[Image: the default NGINX 404 page. Caption: “Quoth the server, 404.”]

In the catch-all config file I added the following line at the top (note this was above the server directive, not inside the curly braces with the 404!):

include /path/to/directory/where/we/will/add/redirects/*;
server {
     return 404;
}

This means that for any traffic other than the main domain, NGINX checks that directory for config files first, and then just serves a 404 if nothing matches.

Then, in the freshly-created “redirects”1 directory referenced in the catch-all .conf, I added a new file containing the following:

server {
     listen 80;
     server_name new-domain.com;
     location / {
          # a request to new-domain.com/foo is proxied through to
          # http://www.main-website.com/restful/uri/foo, with
          # new-domain.com left showing in the address bar
          proxy_pass http://www.main-website.com/restful/uri/;
     }
}

Actually, I tell a lie – I originally just put a redirect to the main domain in there and, once I’d restarted NGINX, found that it worked! I only found proxy_pass was a thing after Googling how to “mask” a URL with NGINX.
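For reference, that first attempt probably looked something like this (a sketch rather than the original file):

server {
     listen 80;
     server_name new-domain.com;
     # a 301 sends the browser to the main domain, so the address bar
     # changes; proxy_pass keeps new-domain.com showing instead
     return 301 http://www.main-website.com/restful/uri$request_uri;
}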

Please note I originally wanted to place these redirect files in the secure NGINX directory, but that threw up a small permissions issue that I will explain in a bit – instead, the folder sits above the public root but in a location accessible by scripts (PHP, CLI, etc.). Please do feel free to tell me how bad an idea this is in the comments, and most importantly why! I need to learn.

So I now had one domain masking another. Using PHP scripts, I experimented with exec() to create and populate a file with the details outlined above, and it worked, albeit after reloading the NGINX config each time to pick up the changes. This wasn’t a problem – the domain redirecting process didn’t have to be instantaneous, and I could just set a cronjob to run every hour and reload the config2.

To reload the NGINX config is a one-liner…

service nginx reload

…and doesn’t require NGINX to restart, so the website never has to disappear while everything reloads. I stuck it in a .sh file, gave it executable permissions (chmod +x) and tried running it. Surprise! It didn’t work.
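For reference, the script was nothing more exotic than this (the file name is my own, for illustration):

#!/bin/sh
# reload-nginx.sh: re-read the NGINX config to pick up new redirect files
service nginx reload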

This is because you need to run it with “sudo” for elevated permissions. Bugger. This is pretty much the problem I was having earlier when trying to generate the NGINX config files within the NGINX directory.

You can automate sudo on the command line, but almost every example requires either echoing out the root password (which doesn’t work from a PHP script on PHP 5.6+ and which, incidentally, is a BAD IDEA) or keeping the username and password in a file on the server and reading them in, which is also a BAD IDEA.

After much Googling, I found an article on Stack Overflow explaining that I could just add an exception allowing that one command to run without a password.

I added the line to /etc/sudoers and promptly broke sudo on the server3 because of a syntax error.

[Image: shit hitting a fan. Caption: “An apt visual metaphor.”]

Before I proceed any further, let me reiterate: DO NOT EDIT THE SUDOERS FILE FOR ANY REASON, NO, NOT EVEN FOR THAT REASON. NEVER EVER TOUCH IT!

I’ve touched the sudoers file and broken it. Erm. What do I do now?

*Sighs*

A lot of the suggested fixes for the sudoers file involve using pkexec or rebooting the server into recovery mode. For me, the former wasn’t installed (the cruel irony being that I needed sudo to install it) and the latter just completely locked me out, as we rely on SSH keys for everything4.

The way I actually fixed it was to power down the box, reset the server’s root password through the hosting panel, boot it back up, “su” into the root account with the new password, and then run “visudo” to remove the offending line that broke sudo in the first place. After that, I resolved to never, ever touch the sudoers file again.

So how do we allow the NGINX reload to run without a password?

How do we add the exception line without touching sudoers itself? That’s where sudoers.d comes in. In much the same way I could extend the NGINX config with other config files, you can do the same with the sudoers file.

I created a file in /etc/sudoers.d called “scripts_permissions” (you can call it anything, as long as the name contains neither a “.” nor a “~”) and added the following:

#Allow nginx to reload config without a password
[username] ALL=(root) NOPASSWD: /usr/sbin/service nginx reload

Obviously you need to replace [username] with the user that needs access. This is when I found out that if you create a crontab for a user, the system runs those cron jobs as that user (not root)! Sounds simple, but it tripped me up because I was putting the wrong user in the rule (I’d assumed it had to be the root account).
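As an aside, if you do ever have to touch anything sudo-related again, visudo can edit and syntax-check sudoers.d drop-in files directly, which would have avoided the earlier disaster entirely:

# edit the drop-in safely; visudo refuses to save a file with a syntax error
sudo visudo -f /etc/sudoers.d/scripts_permissions

# or just syntax-check an existing file
sudo visudo -cf /etc/sudoers.d/scripts_permissions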

This meant I could run “sudo service nginx reload” on the command line and it wouldn’t ask for a password. I set the cronjob to run once an hour during certain times on weekdays and hey presto! It now picks up any new domain rules that I add to /path/to/directory/where/we/will/add/redirects/ (obviously with a real filepath), albeit on an hourly refresh.
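The cronjob itself is a normal crontab entry for that user (added via crontab -e). The schedule below is illustrative rather than the exact one I used:

# reload NGINX on the hour, 9am to 5pm, Monday to Friday;
# the full path to service matches the sudoers rule above
0 9-17 * * 1-5 sudo /usr/sbin/service nginx reload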

The final piece is generating the redirects file containing the proxy_pass rules and moving it into our redirects folder – simple enough to do in PHP using the exec() function or, if you want to be extra safe, dump the PHP-generated files into a holding directory and have the system move them into the sensitive directories so scrubby PHP isn’t soiling your file system with its treacherous clumsy fingers.
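A minimal sketch of that generation step in PHP – the function name, paths and validation here are assumptions for illustration, not the original script:

<?php
// Hypothetical sketch: build an NGINX mask config for one domain and
// drop it in a staging directory, so a privileged job can later move
// it into the real redirects folder and reload NGINX.
function generateMaskConfig($domain, $targetUri, $stagingDir)
{
     // validate the domain so a bad value can't break the NGINX config
     if (!preg_match('/^[a-z0-9][a-z0-9.-]*$/i', $domain)) {
          return false;
     }

     $conf = "server {\n"
           . "     listen 80;\n"
           . "     server_name {$domain};\n"
           . "     location / {\n"
           . "          proxy_pass {$targetUri};\n"
           . "     }\n"
           . "}\n";

     return file_put_contents("{$stagingDir}/{$domain}.conf", $conf) !== false;
}

generateMaskConfig(
     'new-domain.com',
     'http://www.main-website.com/restful/uri/',
     '/path/to/staging'
);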

Can you just use a wildcard DNS entry rather than adding a separate DNS entry for each new subdomain?

You can! Point a wildcard DNS entry at the server and these rules will still let specific subdomains proxy_pass to specific URLs.

There’s one problem (one which I don’t know the cause of) – say you have the following subdomains:

  • test1.otherdomain.com – proxy_pass set to http://maindomain.com/url1
  • test2.otherdomain.com – proxy_pass set to http://maindomain.com/url2

If you have a wildcard DNS entry and then try “test3.otherdomain.com” in your address bar with the above two subdomains in place using the setup outlined here, it will bafflingly point at one of the proxy_pass rules despite there only being rules for “test1” and “test2”. This is because when no server_name matches, NGINX falls back to the default server for that port, which, unless one is explicitly marked default_server, is simply the first server block it loaded.

The solution I found was to create another, separate rule just for the domain as a whole:

server {
     listen 80;
     # exact server_name matches (test1, test2) take priority over
     # the wildcard, so those rules still work
     server_name *.otherdomain.com;
     return 404;
}

This blocks all unwanted wildcard traffic and still allows the other rules to work for proxy_pass.

Erm, one last thing. I’ve added a bunch of NGINX config files and now my app is broken in several places. Like, Google Maps won’t load and stuff.

Ah, I had this problem too. We thought we’d been hacked or blocked by Google or something. As it turns out, we’d added a config file and NGINX had decided to prioritise it over the main configuration file, so all of the main site’s traffic was being masked via a different domain, which was really confusing the app.

It’s a really, really simple fix: add ‘default_server’ to your main config so that NGINX knows which of the myriad config files is the one it should prioritise.
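In practice that’s one extra flag on the listen line in the main site’s server block (using the example domain from earlier):

server {
     listen 80 default_server;
     server_name www.main-website.com;
     # ...the rest of the main site's config...
}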


  1. It should technically be called “masks” but I named the folder “redirects” in case we ever wanted to add other redirects to it.
  2. Yes, this is probably a dangerous idea. In fairness, if one of those generated NGINX config files in the “redirects” directory is malformed, the reload fails validation and NGINX keeps running on the old config, so the site stays up but new domains silently never go live. The trick is to validate the generated config files and not generate a broken one…
  3. I’d like to point out that this was on a development environment and not a production server because I try not to be irresponsible enough to run dangerous server commands on a live environment without proper testing!
  4. On the off-chance you don’t know what this means: you can share a key pair between two computers so they don’t prompt for a username/password every time you connect from one machine to the other. The other benefit is that you can block server access to anyone who doesn’t have keys, which helps with security.
