Samples of Go code and experiments, to explain concepts and to have fun.
- gotchas
- go1.8 features, go1.10 features
- go tools
- pprof, tracing
- channels
- struct embedding
- slice
- go testing (check out the `gopherworld` branch and see the git commits)
- Handler Tests
- gRPC
You may think you've mastered Go in a few days or weeks; well, these are some interesting things, or gotchas, you should be aware of.
- A method call on a nil pointer is valid
```go
type gopher struct {
	name string
}

func (g *gopher) Name() string {
	return "Gopher"
}

var g *gopher
fmt.Printf("content: %v, address: %p, output: %s\n", g, g, g.Name()) // content: <nil>, address: 0x0, output: Gopher
```
- When you slice an existing slice and change the new slice's data, you end up changing the original slice too, because both slices share the same backing array.

```go
original := []int{1, 2, 3, 4, 5}
sub := original[0:3]
sub[0] = 1000 // original is now [1000 2 3 4 5]
```
- Map elements are not addressable, but slice elements are. This is because a map grows as elements are added, so its contents may be moved to a different address whenever it grows.

```go
gs := map[string]gopher{}
gs["tall"].name = "change"
// Error: cannot assign to struct field gs["tall"].name in map
```
Do not communicate by sharing memory; instead, share memory by communicating
When we have multiple goroutines running, we have to coordinate them and share data among them to get the final outcome. Say we have an increment function on a metric: if we spin up n goroutines, we have to wait for all of them to complete before reading the final call count.
```go
// Incr(key string)
for i := 0; i < n; i++ {
	go m.Incr("metrics.call")
}
m.Get("metrics.call") // may return an inconsistent count, i.e. < n
```
We could do `time.Sleep(n * t)` if we know `t`, the processing time; that's a naive approach sometimes used in tests. But the `Incr` function could increment the metric in a cache or a database, or make a network call, so usually we can't bound the time. Instead we can use `sync.WaitGroup` to coordinate: for n goroutines we call `wg.Add(n)`, and each goroutine reports its completion with `wg.Done()`. Note that it has to be passed around as a pointer, `*sync.WaitGroup`.
```go
var wg sync.WaitGroup
wg.Add(n)
for i := 0; i < n; i++ {
	go func(wg *sync.WaitGroup) {
		m.Incr("metrics.call")
		wg.Done()
	}(&wg)
}
wg.Wait() // waits till all n goroutines complete
```
Inside `Incr`, since we have to maintain the metric count, we have to store it somewhere; a `map[string]int` can hold the count for each metric.
```go
type metric struct {
	data map[string]int
}

...

func (m *metric) Incr(key string) {
	m.data[key]++
}
```
When goroutines concurrently access and change the map, we'll have inconsistent data; in our case the runtime will panic with `concurrent map writes`. So we have to use locks to ensure mutually exclusive access to the data for writes: `Incr` holds the lock and unlocks after the write.
```go
type metric struct {
	data map[string]int
	sync.Mutex
}

...

func (m *metric) Incr(key string) {
	m.Lock()
	defer m.Unlock()
	m.data[key]++
}
```
Acquiring locks has its own cost, and as the implementation or logic becomes more complex, the time you hold a lock increases and performance suffers.
Alternatively, we could share data across goroutines using channels.
```go
type Counter struct {
	occurence chan string
	stop      chan struct{}
	data      map[string]int
}

func (c *Counter) Incr(key string) {
	c.occurence <- key
}

func (c *Counter) process() {
	for {
		select {
		case key := <-c.occurence:
			c.data[key]++
		case <-c.stop:
			return
		}
	}
}

go c.process() // a separate goroutine receives the metric name and increments the count
```
In this code we've used a single goroutine and achieved the same result: there are no concurrent writes, because only one goroutine ever writes to the map. This is a contrived example, and in this case locks are more apt, but channels shine in real-world scenarios: worker pools, throttling HTTP requests, collating responses from different HTTP APIs, and so on.
Another simple example uses channels to share data. Given a huge slice, the main goroutine slices the original slice, spins up goroutines, and each goroutine computes the sum of its sub-slice. The goroutines send the partial sums over a channel. We could also have had the workers listen on a channel for `[]int` chunks and send the sums on an output channel, instead of taking a `data []int` parameter.
```go
// goroutine or worker which computes the sum
func add(data []int, result chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	var sum int
	for _, d := range data {
		sum += d
	}
	result <- sum // send the sum back to the collector
}
```
The collector sums up all the workers' results and returns the total over a channel.
```go
func collect(subsum <-chan int, totalSum chan<- int) {
	total := 0
	for s := range subsum {
		total += s
	}
	totalSum <- total
}
```
Check the complete code. Before using this in prod, know more about buffered channels, how goroutines block while writing to or reading from a channel, using `for { select { case ... } }` to wait without blocking, closing channels and reading from them safely with `data, ok := <-somechan`, and shutting down goroutines properly to avoid goroutine leaks. Look at the gotchas code for more information.
A simple CRUD service to explore and explain gRPC. Code, slides.
A test for a simple ping `HandlerFunc`:
```go
func TestPing(t *testing.T) {
	w := httptest.NewRecorder()
	r, _ := http.NewRequest("POST", "/ping", nil)
	Ping(w, r)
	assert.Equal(t, 200, w.Code)
	assert.Equal(t, "pong", w.Body.String())
}
```
Sample tests for a simple `http.Handler` with a testify mock:
```go
func TestDbError(t *testing.T) {
	c := new(checker)
	c.On("Ping").Times(1).Return(errors.New("someerr"))
	h := HealthChecker(c)
	w := httptest.NewRecorder()
	r, _ := http.NewRequest("GET", "/someurl", nil)
	h.ServeHTTP(w, r)
	c.AssertExpectations(t)
	assert.Equal(t, 503, w.Code)
	assert.Equal(t, "Service Unavailable", w.Body.String())
}
```
The code is meant for playing, experimenting, and understanding the concepts. Please follow Effective Go for idiomatic Go. Please create issues/PRs if you think any change should be accommodated; `go vet` errors and `golint` suggestions have to be accommodated too.
- go1.8 slides
- go tools slides
- Testing in go Golang Meetup XXI
- Embedding in go screencast Golang Meetup XXV
- gotchas in go Golang Meetup XXVI screencast
- slice of Slices at Golang Meetup XXVIII
- go 1.10 slides screencast
- concurrency screencast Golang Meetup XXX