In the last blog, we talked about what gRPC is and how to use it to build a simple service. In this blog, we will continue our journey of exploring gRPC by covering topics like client/server streaming (unidirectional and bidirectional) and "language interoperability", where the client and server are written in different languages but communicate with each other using protobuf.
We will keep the service written in Kotlin from the previous blog, but will change the client implementation to Go in this blog. So if you are not familiar with Go, I would recommend you go through the official quick tutorial to get a basic understanding of it.
Some preparation is needed before we start diving into coding. First, install the two `protoc` plugins for Go:

```shell
go get google.golang.org/protobuf/cmd/protoc-gen-go
go get google.golang.org/grpc/cmd/protoc-gen-go-grpc
```

Then update your `PATH` environment variable so that the `protoc` compiler can find these plugins: `export PATH="$PATH:$(go env GOPATH)/bin"`. It would be better to put this line in your `.bash_profile` or `.zshrc` file.

After everything mentioned above is installed, let's set up our Go project. Create an empty folder and name it whatever you want (I named it `go-grpc-demo`), then `cd` into the newly created folder. Inside it, let's initialize our Go project as a Go module by running the following command:
```shell
go mod init <module name>
```

You can choose any name for the module; I named it `<my name>/go-grpc`. After that, you should see a `go.mod` file generated for you. Inspecting the file, you should see `module <module name>` at the top, followed by the Go version.
Next, let's install the dependencies we need for our project. First, install Go protobuf by running the following command:

```shell
go get github.com/golang/protobuf/proto
```

Then, install grpc-go by running:

```shell
go get google.golang.org/grpc
```

Now if you inspect the `go.mod` file, you should see a `require` section that lists the dependencies we just installed.
Next, copy the `blog.proto` file we created in the previous blog into the current project. Before that, we have to edit it and add one line, `option go_package="generated/blog";`, to make it work with our Go project. You can put this line before or after the two existing options. Now create a folder called `protos` inside the folder where `go.mod` sits, then copy the `blog.proto` file into the newly created folder. The last step is to create a script file containing the command that generates the proto code for us; we will use this a lot. I named it `generate_ptoto_codes.sh` with the following content:
```shell
if [ -d "./generated" ]
then
  rm -rf ./generated/*
else
  mkdir -p ./generated
fi
protoc -I protos/ protos/*.proto --go_out=./ --go-grpc_out=./
```

Note that the newer `protoc-gen-go` plugin no longer supports the old `--go_out=plugins=grpc` syntax; the gRPC service code is generated by the separate `protoc-gen-go-grpc` plugin via the `--go-grpc_out` flag instead.
Now that everything is set up and ready to go, let's get into coding!
Let's implement the simple client/server method we defined last time. Run the script file first to generate the Go gRPC code, then create a new folder called `client`, in which we will place our client code. At this point, your project structure should look like this:
Inside the `client` folder, create a file named `blog.go` (or whatever name you want). Write an `initialize` function, which we will reuse a lot later. The function should look like below:
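As a reference, here is a minimal sketch of such an `initialize` function. The generated package import path, the server port, and the `BlogServiceClient`/`NewBlogServiceClient` names are assumptions based on the `go_package` option and typical generated code; match them to your own module name and proto file:

```go
package client

import (
	"log"

	"google.golang.org/grpc"

	// Assumed import path: <module name>/generated/blog, per the go_package option.
	pb "go-grpc-demo/generated/blog"
)

// initialize dials the local gRPC server (address and port assumed) and
// returns the connection, so callers can close it, plus a generated client.
func initialize() (*grpc.ClientConn, pb.BlogServiceClient) {
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("could not connect: %v", err)
	}
	return conn, pb.NewBlogServiceClient(conn)
}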
It returns a connection and a blog client (the required packages should be auto-imported by your IDE or code editor if you have the Go extensions installed). Now let's implement the unary example we defined last time. Run the `generate_ptoto_codes.sh` script to generate the proto code, then put this piece of code after the `initialize` function:
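A hedged sketch of what the unary call might look like; the method name `CreateBlog` and the request fields are assumptions, since the actual names depend on the service defined in the previous blog:

```go
// BlogUnaryExample performs a single unary request/response round trip.
func BlogUnaryExample() {
	conn, client := initialize()
	defer conn.Close()

	// Method and field names are assumptions; match them to your blog.proto.
	resp, err := client.CreateBlog(context.Background(), &pb.CreateBlogRequest{
		Title:  "My first blog",
		Author: "someone",
	})
	if err != nil {
		log.Fatalf("CreateBlog failed: %v", err)
	}
	log.Printf("response: %v", resp)
}
```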
Looks pretty simple; nothing fancy here. To run it, we have to create a `main.go` file under the root folder (where the `go.mod` file sits) and put in the following piece of code:
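Something along these lines, assuming a module path and an example function name like the ones used above:

```go
package main

// Assumed import path: <module name>/client.
import "go-grpc-demo/client"

func main() {
	// Call whichever example function you are currently testing.
	client.BlogUnaryExample()
}
```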
Make sure the server-side code is running by starting the Kotlin server from the last blog. You can run the client-side code by clicking the green triangle button next to the `main` function in GoLand, or by typing `go run main.go` in a terminal if you prefer. Wait a few seconds and you should see the response come back as intended.
To start with client streaming, we need to update our `blog.proto` file to add the corresponding method definition. Add the following code to the `blog.proto` files on both the client (Go) side and the server (Kotlin) side:
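The added definition might look like the sketch below; the message names are assumptions based on how the method is described later in this blog:

```protobuf
// Client streaming: the client sends many requests, the server replies once.
rpc SaveBlogs(stream SaveBlogRequest) returns (SaveBlogsResponse);
```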
Pay attention to the `stream` keyword used in the method parameter; this is the magic word that tells the protobuf compiler to generate the necessary code for us to do streaming. Now on the server side, make sure to run the `generateProto` Gradle task to generate new proto code. On the client side, run the `generate_ptoto_codes.sh` script again to generate new proto code.
Now we are ready to write the streaming code. On the server side, before we override the new `saveBlogs` method, we have to make a few changes to our class variables and data-generation method. The changes are:
Then, let's override the `saveBlogs` method like below:
Notice that the parameter is of type `Flow`, which is a Kotlin coroutine concept; you can check the Kotlin documentation for more details. This simple method collects all the save-blog requests, transforms them into blog data to save, and then returns a response with the current total number of blogs. Nothing crazy here. Restart the server, then switch to the client side (`client/blog.go`) and put in the following code:
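A hedged sketch of the client-streaming call, assuming the `SaveBlogs` method and `SaveBlogRequest` fields sketched earlier:

```go
// BlogClientStreamExample sends 10 save requests on one stream, then
// closes the stream and waits for the server's single response.
func BlogClientStreamExample() {
	conn, client := initialize()
	defer conn.Close()

	log.Println("starting client streaming")
	stream, err := client.SaveBlogs(context.Background())
	if err != nil {
		log.Fatalf("SaveBlogs failed: %v", err)
	}
	for i := 0; i < 10; i++ {
		// Field names are assumptions; match them to your blog.proto.
		req := &pb.SaveBlogRequest{Title: fmt.Sprintf("blog #%d", i)}
		if err := stream.Send(req); err != nil {
			log.Fatalf("send failed: %v", err)
		}
	}
	// Signal the server that we are done sending and wait for its reply.
	resp, err := stream.CloseAndRecv()
	if err != nil {
		log.Fatalf("CloseAndRecv failed: %v", err)
	}
	log.Printf("total blogs: %v", resp)
}
```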
You can see that the part before the first `log` statement is the same as in the unary example; the difference starts after it. `SaveBlogs` returns a `BlogService_SaveBlogsClient` (the stream) and an `error`. `BlogService_SaveBlogsClient` is an interface with the corresponding `Send` and `CloseAndRecv` methods for streaming. The `for` loop is the streaming part, which sends 10 blog requests. `CloseAndRecv` signals to the server that the client stream is done and is ready to receive the server's response. As you can see, it's not very complicated. Then go to the `main.go` file, comment out the unary example call, and put the client streaming call after it.
Run it again, and you should see stream log outputs.
So far, everything we have done has been easy to understand. The server stream, however, is not something we see very often; most people have never used it, and it might not be that straightforward to understand at first, but don't worry. We will look at it using a simple example. Update the `blog.proto` file on both the client and server sides by adding the following:
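The addition might look like the sketch below; the message names are assumptions, and the request message would be an empty one since no data needs to be sent:

```protobuf
// Server streaming: the client sends one (empty) request,
// and the server streams blogs back one at a time.
rpc ListBlogs(ListBlogsRequest) returns (stream ListBlogsResponse);
```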
Notice that the `stream` keyword has moved to the `returns` section. Also, the request is empty in this example: we don't need to pass any data to the server this time; we will simply ask the server to stream back all current blogs. Next, run the corresponding Gradle task and shell script to generate new proto code for client and server. On the server side, override the `listBlogs` method like below:
You can see that the return type now becomes `Flow`. We also purposely delay 100 ms for each blog returned so that you can see the stream responses better; you can adjust the time to any value you want. On the client side, type the following code:
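A hedged sketch of the server-streaming client code, assuming the `ListBlogs` method and message names sketched earlier:

```go
// BlogServerStreamExample asks the server to stream all blogs back and
// reads from the stream until the server signals the end with io.EOF.
func BlogServerStreamExample() {
	conn, client := initialize()
	defer conn.Close()

	stream, err := client.ListBlogs(context.Background(), &pb.ListBlogsRequest{})
	if err != nil {
		log.Fatalf("ListBlogs failed: %v", err)
	}
	for {
		blog, err := stream.Recv()
		if err == io.EOF {
			break // server closed the stream
		}
		if err != nil {
			log.Fatalf("receive failed: %v", err)
		}
		log.Printf("received blog: %v", blog)
	}
}
```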
You can see that it's quite similar to the client stream. In the infinite loop, we keep calling the `Recv` method on the `BlogService_ListBlogsClient`, then check whether an `io.EOF` error is returned, which signals the end of the stream from the server. This simple example should be easy to understand. In the `main.go` file, comment out all code in the `main` function and call `client.BlogServerStreamExample()`. Run it again and the server stream log output should be printed for you. Quite exciting, right?
After going through client and server streams, I hope you have grasped a basic understanding of gRPC streaming. This will help you understand the final part of this blog: the bidirectional stream. It sounds super cool, but it is also probably the least common scenario; most people have never used anything similar before. Let's start by updating our `blog.proto` file by adding the following:
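The addition might look like the sketch below; both message names are assumptions based on the behavior described next:

```protobuf
// Bidirectional streaming: the client streams save requests while the
// server streams back the author with the most blogs so far.
rpc GetAuthorWithMostBlogsOnSave(stream SaveBlogRequest)
    returns (stream AuthorWithMostBlogsResponse);
```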
In this example, the client keeps sending save-blog requests while the server keeps processing them and returning the author who currently has the most blogs back to the client, until the client signals the end of sending; the server then stops returning responses. Make sure to generate new proto code. As you can see, it's a very basic bidirectional stream example. After you update the proto files in both places and generate new code for both, on the server side we need to make two more changes to the class member variables and init code:
Now it's time to override the new `getAuthorWithMostBlogsOnSave` method:
What this method does: a new response flow (response stream) is created based on the request flow (request stream). Inside the response flow builder, the request flow is first mapped to blogs, which are then saved into the `blogCollection` variable; after that, `authorCount` is updated, which in turn is used to construct a new response to the client. As new requests keep flowing in as a stream, new responses keep flowing out as a stream as well. The server code is now finished, so let's switch to the client side. Add the following to `blog.go`:
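A hedged sketch of the bidirectional client code, assuming the method and message names sketched earlier; the key parts are the separate receiving goroutine and the `done` channel:

```go
// BlogBidirectionalStreamExample sends save requests on the main routine
// while a separate goroutine receives the server's streamed responses.
func BlogBidirectionalStreamExample() {
	conn, client := initialize()
	defer conn.Close()

	stream, err := client.GetAuthorWithMostBlogsOnSave(context.Background())
	if err != nil {
		log.Fatalf("stream setup failed: %v", err)
	}

	done := make(chan struct{})
	go func() {
		defer close(done) // unblocks <-done below once the stream ends
		for {
			resp, err := stream.Recv()
			if err == io.EOF {
				return
			}
			if err != nil {
				log.Fatalf("receive failed: %v", err)
			}
			log.Printf("author with most blogs so far: %v", resp)
		}
	}()

	for i := 0; i < 10; i++ {
		// Field names are assumptions; match them to your blog.proto.
		req := &pb.SaveBlogRequest{Author: fmt.Sprintf("author-%d", i%3)}
		if err := stream.Send(req); err != nil {
			log.Fatalf("send failed: %v", err)
		}
	}
	stream.CloseSend() // tell the server we are done sending
	<-done             // wait for the receiving goroutine to finish
}
```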
At first, a bunch of requests are pre-built, and the stream initialization looks pretty much the same as before. What's different is that the client now creates a separate goroutine to keep receiving responses from the server, while in the main routine it just keeps sending requests. Notice how the `done` channel variable is used between these two goroutines to control program execution: once the goroutine that receives responses finishes processing the stream, it closes the `done` channel, which causes `<-done` to unblock and ends the entire client program. Now try running this new method in `main.go`, and you should see interesting results printed. You can also play with the client/server sleep times to see what different results they produce.
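The `done`-channel pattern itself is plain Go and can be illustrated without gRPC. In this self-contained sketch, a receiving goroutine drains a channel standing in for the response stream and closes `done` when the "stream" is exhausted, just like the client above:

```go
package main

import "fmt"

// consume drains responses in a separate goroutine and closes done when
// the stream (here, a plain channel) is exhausted, mirroring the client.
func consume(responses <-chan string) []string {
	collected := []string{}
	done := make(chan struct{})
	go func() {
		defer close(done) // signals the main routine that receiving finished
		for r := range responses {
			collected = append(collected, r)
		}
	}()
	<-done // block until the receiving goroutine is done
	return collected
}

func main() {
	responses := make(chan string, 3)
	responses <- "a"
	responses <- "b"
	responses <- "c"
	close(responses) // like the server ending the stream
	fmt.Println(consume(responses))
}
```

Closing a channel is what makes both `range responses` and `<-done` terminate; sending on the channel alone would leave the goroutine blocked forever.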
Congratulations! If you have read to this point, you have gone through all the client/server communication patterns gRPC offers, and you have also experienced the 'interoperability' part of gRPC, where a client and server written in different languages communicate with each other through protobuf messages. It's a lot to go through, especially if you are not used to the streaming concept. Hopefully, after you finish reading and practicing along with this blog, you will have a better understanding of both gRPC and streaming.
The final code for the blog can be found here.
Callibrity has locations in Cincinnati and Columbus, Ohio, with a national reach. Callibrity meets its clients wherever they are on their digital journey, specializing in software engineering, digital transformation, cloud strategy, and data-driven insights. Callibrity provides subject matter expertise and solves complex problems with simple solutions for ever-changing business models. More information can be found at Callibrity.com.