This package implements Mutex and RWMutex. They are similar to sync.Mutex and sync.RWMutex, but they track which goroutine locked the mutex and do not deadlock if the same goroutine tries to lock the same mutex again.
type myEntity struct {
	gorex.Mutex
}

func (ent *myEntity) func1() {
	ent.Lock()
	defer ent.Unlock()

	// .. some stuff ..
	ent.func2() // will not get a deadlock here!
	// .. other stuff ..
}

func (ent *myEntity) func2() {
	ent.Lock()
	defer ent.Unlock()

	// .. more stuff ..
}
The same using the LockDo syntax:
type myEntity struct {
	gorex.Mutex
}

func (ent *myEntity) func1() {
	ent.LockDo(func() {
		// .. some stuff ..
		ent.func2() // will not get a deadlock here!
		// .. other stuff ..
	})
}

func (ent *myEntity) func2() {
	ent.LockDo(func() {
		// .. more stuff ..
	})
}
The RWMutex additionally provides RLockDo, and the same goroutine can upgrade from a read lock to a write lock:

locker := &gorex.RWMutex{}

locker.RLockDo(func() {
	// .. do some read-only stuff ..
	if cond {
		return
	}
	locker.LockDo(func() { // will not get a deadlock here!
		// .. do write stuff ..
	})
})
But you will still get a deadlock if you do it this way:
var locker = &gorex.RWMutex{}

func someFunc() {
	locker.RLockDo(func() {
		// .. do some read-only stuff ..
		if cond {
			return
		}
		locker.LockDo(func() { // you will get a deadlock here!
			// .. do write stuff ..
		})
	})
}

func main() {
	go someFunc()
	go someFunc()
}
because a situation can arise where the resource is locked via RLockDo by both goroutines and each goroutine waits (on LockDo) for the other one to finish its RLockDo. But you will still easily see the cause of such deadlocks from the LockDo calls in the stack trace.
It is substantially slower than the bare sync.Mutex/sync.RWMutex:
goos: linux
goarch: amd64
pkg: github.com/xaionaro-go/gorex
Benchmark/Lock-Unlock/single/sync.Mutex-8              77933413   15.2 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-Unlock/single/sync.RWMutex-8            46052574   26.1 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-Unlock/single/Mutex-8                   20281420   58.6 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-Unlock/single/RWMutex-8                 13518639   87.1 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-Unlock/parallel/sync.Mutex-8            10836991    111 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-Unlock/parallel/sync.RWMutex-8           9065725    133 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-Unlock/parallel/Mutex-8                  9425310    123 ns/op   2 B/op   0 allocs/op
Benchmark/Lock-Unlock/parallel/RWMutex-8                5309696    213 ns/op   4 B/op   0 allocs/op
Benchmark/RLock-RUnlock/single/sync.RWMutex-8          76609815   15.2 ns/op   0 B/op   0 allocs/op
Benchmark/RLock-RUnlock/single/RWMutex-8               25071478   47.9 ns/op   0 B/op   0 allocs/op
Benchmark/RLock-RUnlock/parallel/sync.RWMutex-8        25705654   48.3 ns/op   0 B/op   0 allocs/op
Benchmark/RLock-RUnlock/parallel/RWMutex-8             14786738   80.9 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-ed:Lock-Unlock/single/Mutex-8           31392260   38.2 ns/op   0 B/op   0 allocs/op
Benchmark/Lock-ed:Lock-Unlock/single/RWMutex-8         32588916   37.6 ns/op   0 B/op   0 allocs/op
Benchmark/RLock-ed:RLock-RUnlock/single/RWMutex-8      26416754   46.1 ns/op   0 B/op   0 allocs/op
Benchmark/RLock-ed:RLock-RUnlock/parallel/RWMutex-8    13113901   88.7 ns/op   0 B/op   0 allocs/op
PASS
ok      github.com/xaionaro-go/gorex    20.321s
But sometimes it lets you think about strategic problems ("this stuff should be edited atomically, so I'll be able to...") instead of wasting time on tactical ones ("how do I handle these locks?") :)
Of course, this package does not eliminate every possible cause of deadlocks, but it also provides a way to debug what's going on. First of all, I recommend using LockDo instead of a bare Lock where possible:
- It makes sure you do not forget to unlock the mutex (see the sketch below).
- It shows up in the call stack trace (so you can see what is going on).
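For example, here is a small sketch (reusing the myEntity type and the cond placeholder from the examples above) of the first point: with a bare Lock, an early return silently keeps the mutex locked, while LockDo releases it no matter how the callback returns:

func (ent *myEntity) update() {
	ent.Lock()
	if cond {
		return // BUG: this path returns with the mutex still locked
	}
	// .. write stuff ..
	ent.Unlock()
}

func (ent *myEntity) updateSafe() {
	ent.LockDo(func() {
		if cond {
			return // the mutex is released even on this early return
		}
		// .. write stuff ..
	})
}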
If you already have a deadlock, then I recommend writing a unit/integration test which reproduces the deadlock situation and then putting a time limit on gorex.DefaultInfiniteContext. On a deadlock it will panic and show the call stack trace of every goroutine which holds the lock.
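A minimal sketch of such a test, assuming gorex.DefaultInfiniteContext is a package-level context.Context that can be replaced before the test runs (check the package source for the exact type and mechanism):

package mypkg_test

import (
	"context"
	"testing"
	"time"

	"github.com/xaionaro-go/gorex"
)

func TestReproduceDeadlock(t *testing.T) {
	// Assumption: limiting this context makes gorex panic (instead of waiting
	// forever) when a lock cannot be acquired before the timeout.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	gorex.DefaultInfiniteContext = ctx

	// .. call the code which reproduces the deadlock here ..
}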
For example, in my case I saw:
monopolized by:
/home/xaionaro/.gimme/versions/go1.13.linux.amd64/src/runtime/proc.go:2664 (runtime.goexit1)
panic: The InfiniteContext is done...
So it appears a goroutine had already exited without ever releasing the lock. Therefore support for the build tag deadlockdebug was added, which prints the call stack trace of a lock which was never released (while its goroutine has already exited). Specifically, in my case it printed:
$ go test ./... -timeout 1s -bench=. -benchtime=100ms -tags deadlockdebug
...
an opened lock {LockerPtr:824636154976 IsWrite:true} which was never released (and the goroutine already exited):
/home/xaionaro/go/pkg/mod/github.com/xaionaro-go/gorex@v0.0.0-20200308222358-b650fa4b5b14/rw_mutex.go:72 (github.com/xaionaro-go/gorex.(*RWMutex).lock)
/home/xaionaro/go/pkg/mod/github.com/xaionaro-go/gorex@v0.0.0-20200308222358-b650fa4b5b14/rw_mutex.go:43 (github.com/xaionaro-go/gorex.(*RWMutex).Lock)
/home/xaionaro/go/src/github.com/xaionaro-go/secureio/session.go:1480 (github.com/xaionaro-go/secureio.(*Session).sendDelayedNow)
/home/xaionaro/go/src/github.com/xaionaro-go/secureio/session.go:1774 (github.com/xaionaro-go/secureio.(*Session).startKeyExchange.func2)
/home/xaionaro/go/src/github.com/xaionaro-go/secureio/key_exchanger.go:449 (github.com/xaionaro-go/secureio.(*keyExchanger).sendSuccessNotifications)
/home/xaionaro/go/src/github.com/xaionaro-go/secureio/key_exchanger.go:408 (github.com/xaionaro-go/secureio.(*keyExchanger).Handle.func2)
/home/xaionaro/go/src/github.com/xaionaro-go/secureio/key_exchanger.go:242 (github.com/xaionaro-go/secureio.(*keyExchanger).LockDo)
...
So I opened session.go:1480, added defer sess.delayedWriteBuf.Unlock(), and it fixed the problem :)
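In code, the fix had roughly this shape (a hypothetical reconstruction based on the stack trace above, not the actual secureio source):

func (sess *Session) sendDelayedNow() {
	sess.delayedWriteBuf.Lock()
	// The missing line: without it the lock leaked when the goroutine exited.
	defer sess.delayedWriteBuf.Unlock()

	// .. send the delayed data ..
}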
I found two other implementations:

The first one is broken:

panic: unsupported go version go1.13

The second one polls with millisecond-scale sleeps, which:
- Gives good results if the lock is short-lived: its performance is much better (than this package).
- Continuously consumes CPU on long-lived locks; I'm developing an application for mobile phones and would like to avoid such problems.
- Does not support RLock/RUnlock.