As mentioned in the last post, my primary interest is writing web applications in C with Kore. That post showed how to create a small application and run it in development mode. In production there are two ways to run Kore. The first is standalone, meaning Kore is bound directly to a port on a public IP address. Configuring Kore that way is no problem, since it is one of the intended modes of operation. But I prefer a different approach: I want Kore to run as a background application that receives its requests through a proxy. The proxy's job is to terminate SSL/TLS, do caching, and let me serve different web applications from a single IP address.

Turn off OpenSSL for Kore

The first problem is that Kore is compiled with OpenSSL support by default, which means it does not accept plain HTTP requests. That is fine in principle, but it doesn't work well for my setup: the proxy, in my case nginx, is supposed to handle SSL/TLS, yet Kore would encrypt the traffic a second time. Double encryption on the same machine makes no sense to me. Fortunately Kore's Makefile supports building without OpenSSL. Just pass BENCHMARK=1 to make:

$ make BENCHMARK=1

That’s it, from now on Kore delivers content unencrypted via HTTP.
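If you build Kore from source anyway, the whole sequence is short. Here is a minimal sketch, assuming you build from the upstream Git repository; the install step and paths may differ on your system:

$ git clone https://github.com/jorisvink/kore.git
$ cd kore
$ make BENCHMARK=1
$ sudo make install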

Configuring Kore

The configuration of our Kore application also becomes a bit simpler. We don't have to set options for the TLS certificate and private key, since nginx takes over the job of terminating the encryption. The Kore application should be bound to a local IP address like 127.0.0.1 so that it cannot be reached directly from the internet. It is also a good idea to chroot the application into a separate directory for security reasons. Finally, change the domain configuration to the domain for which the application should answer requests. The finished configuration file looks like this:

bind 127.0.0.1 8888
chroot /var/run/kore

load ./example.so

domain example.com {
  static / page
}
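One note on the chroot directive: the target directory has to exist before Kore starts, and chroot(2) generally requires root privileges. A quick sketch, assuming the path from the configuration above:

$ sudo mkdir -p /var/run/kore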

The preparation of our Kore application is finished. Now let’s start it in production mode:

$ kore -c conf/example.conf

We can now send a request to the application to check that it responds correctly:

$ curl --header "Host: example.com" http://127.0.0.1:8888/
Hello world!

It works! Yay!
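For context, the page handler referenced by static / page is the one from the previous post. Here is a minimal sketch of what it might look like; your actual handler may differ:

#include <kore/kore.h>
#include <kore/http.h>

int	page(struct http_request *);

int
page(struct http_request *req)
{
	/* Answer every request with a plain "Hello world!" body. */
	http_response(req, 200, "Hello world!", 12);
	return (KORE_RESULT_OK);
}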

Virtual hosts for nginx

Now that Kore works the way we want, we can move on to configuring nginx. The goal is to configure nginx so that it redirects non-HTTPS requests to HTTPS and proxies everything else to Kore. For that we need two virtual hosts. The host that only redirects requests is pretty simple:

server {
  listen 0.0.0.0:80;
  listen [::]:80;
  server_name example.com;

  location / {
    return 301 https://$host$request_uri;
  }
}

We simply declare that everything is answered with a permanent redirect (HTTP 301). The host that accepts the actual traffic is a bit more involved. For easier reading I have omitted the special SSL settings. Here's the HTTPS host:

server {
  listen 0.0.0.0:443;
  listen [::]:443;
  server_name example.com;

  ssl_certificate certs/cert.pem;
  ssl_certificate_key certs/privatekey.pem;
  ssl on;

  location / {
    proxy_pass http://127.0.0.1:8888;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
  }
}

As you can see, nginx is configured to act as a reverse proxy: it passes all requests on to our Kore application listening on 127.0.0.1, port 8888. But that alone is not enough. Kore only accepts HTTP/1.1, so we have to explicitly set the HTTP version of the proxied request to 1.1; nginx's default is HTTP/1.0, which Kore would reject as the wrong protocol. The last thing keeping our setup from working is the Host header, which nginx rewrites while proxying. We have to pass the original header along because our Kore configuration defines a virtual host named example.com. With the corrected header everything should work as expected.
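Before testing, nginx has to pick up the new configuration. A small sketch, assuming a systemd-based system; adjust the reload command to your init system:

$ sudo nginx -t
$ sudo systemctl reload nginx

If the configuration test passes and the reload succeeds, we can try the whole chain: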

$ curl https://example.com/
Hello world!

It works!

I hope this little example shows that Kore is not just a toy, but can be used to build serious web applications that run in just about any environment.

Stay tuned for the next post about sending mail from C, where you will learn about one more tool that rounds out your web stack.