Deploy Service with gRPC, Envoy and NGINX

Are you bored with RESTful APIs? Take a look at gRPC! gRPC is a high-performance RPC framework that communicates with Protobuf. With the help of Envoy, a powerful service proxy that handles load balancing and translation, and NGINX as the front proxy for static files and frontend requests, you can easily host your SPA application with all the benefits of gRPC.

Introduction

The benefits of using gRPC include multiplexing with HTTP/2, small packet size, and fast serialization/deserialization with Protobuf. More discussion can be found on the website from Microsoft.

In this article, we will not get into the details of how to write a gRPC backend service, since the official documentation from gRPC is very detailed. For example, you can take a look at the starter code for Java.

Our focus is how to:

  1. Use the Envoy proxy as the endpoint for the gRPC service, so that your gRPC service can easily scale. Envoy will also act as a gateway, translating the gRPC-web requests that browsers understand to/from the HTTP/2 gRPC requests that the backend services understand.
  2. Utilize NGINX as the proxy for static files (e.g. web pages built by Vue.js), as well as to proxy all gRPC requests to/from Envoy.
  3. Use gRPC-web in the browser to perform remote calls.

Envoy

The following image (credit to Envoy and gRPC-Web: a fresh new alternative to REST) shows exactly the role Envoy plays, as discussed above.

  1. Load balancing

    Envoy supports multiple types of load balancing, which ensures the scalability, performance and fault tolerance of your gRPC services.

  2. Translation

    Envoy acts as a proxy between the browser and backend services to translate between gRPC-web and gRPC.

The role of Envoy in a gRPC-Web application

Run Envoy via docker

Assume the Envoy endpoint is on port 8080 and your gRPC service endpoint is on port 9090.

  1. Create the Dockerfile
FROM envoyproxy/envoy:v1.14.1
COPY envoy.yaml /etc/envoy.yaml
EXPOSE 8080
CMD /usr/local/bin/envoy -c /etc/envoy.yaml
  2. Create the envoy.yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: my_service
                  max_grpc_timeout: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message
          http_filters:
          - name: envoy.grpc_web
          - name: envoy.cors
          - name: envoy.router
  clusters:
  - name: my_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: localhost, port_value: 9090 }}]
  3. Build and run
docker build . --tag envoy:v1
docker run -d --restart always --net="host" envoy:v1
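
Note that the inline `config` under `envoy.http_connection_manager` and the cluster `hosts` field used above are deprecated in newer Envoy releases. If you run a recent image, the cluster section would look roughly like the following sketch (v3 API field names; adjust to the version you actually deploy):

```yaml
clusters:
- name: my_service
  connect_timeout: 0.25s
  type: LOGICAL_DNS
  lb_policy: ROUND_ROBIN
  # http2_protocol_options moved into typed_extension_protocol_options in the v3 API
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}
  # hosts: [...] is replaced by load_assignment
  load_assignment:
    cluster_name: my_service
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: localhost, port_value: 9090 }
```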

NGINX

Now that your Envoy proxy is running on port 8080, we want to expose the service under /api, which means NGINX should rewrite /api/service/myFunction to /service/myFunction before forwarding it to Envoy.

upstream envoy {
    server 127.0.0.1:8080;
    keepalive 16;
}

server {
    listen 80;

    location /api/ {
        proxy_http_version 1.1;
        proxy_pass http://envoy/;
        proxy_set_header Connection "";
    }

    location / {
        root /var/www/dist/;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}

As we can see from the above, NGINX proxies /api to the envoy upstream (the trailing slash in proxy_pass strips the /api prefix), and serves our static website at /.
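
The prefix stripping performed by the trailing slash in proxy_pass can be sketched as a pure function (illustrative only; NGINX does this rewrite internally):

```javascript
// Sketch of what `location /api/ { proxy_pass http://envoy/; }` does to a path:
// the matched location prefix "/api/" is replaced by the URI in proxy_pass ("/").
function rewriteApiPath(path) {
  const locationPrefix = "/api/";
  const proxyPassUri = "/";
  if (!path.startsWith(locationPrefix)) return path; // other locations handle it
  return proxyPassUri + path.slice(locationPrefix.length);
}

console.log(rewriteApiPath("/api/service/myFunction")); // "/service/myFunction"
console.log(rewriteApiPath("/index.html"));             // "/index.html"
```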

Web

To fully utilize binary serialization/deserialization, use protoc to generate code in grpcweb mode instead of grpcwebtext. Also, generating .d.ts files for type hints is preferred (or import_style=typescript if you prefer TypeScript).

protoc -I=. service.proto \
--js_out=import_style=commonjs:proto-frontend \
--grpc-web_out=import_style=commonjs+dts,mode=grpcweb:proto-frontend

It will generate two files, service_pb.js and service_grpc_web_pb.js, along with the two corresponding .d.ts files.
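
For reference, a service.proto consistent with the examples in this article might look like the following sketch (the package name and message fields are assumptions; substitute your own definitions):

```protobuf
syntax = "proto3";

package service;

message MyFunctionRequest {
  string name = 1;
}

message MyFunctionReply {
  string message = 1;
}

service Service {
  rpc MyFunction(MyFunctionRequest) returns (MyFunctionReply);
}
```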

The following example demonstrates how simple it is to perform an RPC call.

import {ServicePromiseClient} from "@/proto/service_grpc_web_pb";
import {MyFunctionRequest} from "@/proto/service_pb";
// initialize your client
const client = new ServicePromiseClient("/api", null, null);
// build your input
const request = new MyFunctionRequest();
request.setName("someName");
// call the RPC and wait for the reply
const reply = await client.myFunction(request, {});
// now reply is of type MyFunctionReply
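
A failed call rejects with an error carrying a gRPC status code and message. A minimal sketch of handling it, using a stand-in client so the snippet is self-contained (the real client would be the generated ServicePromiseClient):

```javascript
// Stand-in for a generated client whose RPC fails; grpc-web rejects the
// promise with an object exposing `code` (gRPC status) and `message`.
const client = {
  myFunction: () => Promise.reject({ code: 14, message: "upstream connect error" }),
};

async function callMyFunction(request) {
  try {
    return await client.myFunction(request, {});
  } catch (err) {
    // err.code is a gRPC status code, e.g. 14 = UNAVAILABLE
    return `RPC failed: code=${err.code} message=${err.message}`;
  }
}

callMyFunction({}).then(console.log); // "RPC failed: code=14 message=upstream connect error"
```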

Wrap it up

When using gRPC, your web client just calls the function as if it were a local function. However, it's somewhat complicated under the hood.

  1. The web client calls /api/service/myFunction via an HTTP, REST-like request.
  2. The NGINX server receives it, rewrites the request URL to /service/myFunction, and forwards it to the Envoy endpoint on port 8080 over HTTP/1.1.
  3. The Envoy proxy receives the /service/myFunction request; by examining the HTTP headers it recognizes a gRPC-web request, translates it into the gRPC format, and sends it over HTTP/2 to the gRPC backend on port 9090 to call myFunction of service.
  4. Your gRPC backend on port 9090 executes myFunction of service, performs the computation, and returns the result to the Envoy proxy.
  5. The Envoy proxy translates the response back to gRPC-web over HTTP/1.1 and sends it to NGINX.
  6. The NGINX server forwards it to the web client.
  7. The web client receives the response.