All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Tidy up cgo flags
- ctype `long` caused a compiling error on macOS as noted in #44. Not working on a Linux box.
- [#111, #112, #113] Fixed concurrency memory leak
- #118 Fixed incorrect cbool conversion
- Upgrade libtorch v2.1.0
- Upgrade libtorch v2.0.0
- Upgrade Go version 1.20
- Switched to a hybrid of Go garbage collection and manual memory management
- Fixed #100 #102
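The hybrid of Go garbage collection and manual memory management mentioned above can be sketched with a minimal, self-contained example. The `Tensor`/`Drop` names mirror gotch's style, but `tensorHandle` is hypothetical and stands in for the real cgo-backed libtorch pointer:

```go
package main

import (
	"fmt"
	"runtime"
)

// tensorHandle stands in for a C-allocated libtorch tensor pointer.
// The freed flag tracks deallocation so the example is self-contained;
// in gotch the real cleanup calls into cgo.
type tensorHandle struct {
	freed bool
}

// Tensor wraps the native handle. Callers may free it explicitly with
// Drop(); a finalizer acts as a safety net if they forget, which is the
// "hybrid" of manual management and Go GC described above.
type Tensor struct {
	h *tensorHandle
}

func NewTensor() *Tensor {
	t := &Tensor{h: &tensorHandle{}}
	runtime.SetFinalizer(t, func(t *Tensor) { t.Drop() })
	return t
}

// Drop frees the native memory. It is idempotent, so calling it after
// the finalizer (or twice by hand) is safe.
func (t *Tensor) Drop() {
	if t.h != nil && !t.h.freed {
		t.h.freed = true
	}
}

func main() {
	t := NewTensor()
	t.Drop() // deterministic, manual free
	t.Drop() // second call is a no-op
	fmt.Println("freed:", t.h.freed)
}
```

The explicit `Drop()` gives deterministic release of large native buffers, while the finalizer only backstops leaks.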
- Fixed incorrect indexing at `dutil/Dataset.Next()`
- Added `nn.MSELoss()`
- Reworked `ts.Format()`
- Added conv2d benchmark
- Fixed #88 memory leak at `example/char-rnn`
- Added missing tensor `Stride()` and `MustDataPtr()`, `IsMkldnn`, `MustIsMkldnn`, `IsContiguous`, `MustIsContiguous`
- Added ts `New()`
- Added `WsName` and `BsName` fields to `nn.LayerNorm.Config`
- [#70] Upgraded to libtorch 1.11
- Added API `Path.Remove()`, `Path.MustRemove()`
- Fixed `dutil/MapDataset`
- [#69] Changed package name `tensor` -> `ts` for easier coding.
- [#68] Simplified `VarStore` struct and added more APIs for `VarStore` and `Optimizer`
- Fixed pickle with zero data length
- Added `gotch.CleanCache()` API.
- Fixed wrong `cacheDir` and switched off logging.
- Added more pickle classes to handle unpickling
- Added subpackage `pickle`. Now we can directly load a Python PyTorch pretrained model without any Python script conversion.
- Added `gotch.CachePath()` and `gotch.ModelUrls`
- Remove Travis CI for now.
- Fixed `tensor.OfSlice()` throwing an error due to "Unsupported Go type" (e.g. `[]float32`)
- Added `nn.Path.Paths()` method
- Added `nn.VarStore.Summary()` method
- Fixed incorrect tensor method `ts.Meshgrid` -> `Meshgrid`
- Added new API `ConstantPadNdWithVal` (`ato_constant_pad_nd` with a padding value)
- Fixed "nn/rnn NewLSTM() clashed weight names"
- Fixed some old APIs at `vision/aug/function.go`
- Fixed `tensor.OfSlice()` not supporting `[]int` data type
- Fixed `tensor.ValueGo()` returning `[]int` instead of `[]int32`
- added more building block modules: Dropout, MaxPool2D, Parameter, Identity
- added nn.BatchNorm.Forward() with default training=true
- Exposed `tensor.Ctensor()`
- Added API `tensor.FromCtensor()`
- [#67]: Fixed incorrect type casting at `atc_cuda_count`
- Upgraded to libtorch 1.10
- #58 Fixed incorrect converting IValue from CIValue case 1 (Tensor).
- Added Conv3DConfig and Conv3DConfig Option
- Added missing Tensor method APIs that return multiple tensors (e.g. `tensor.Svd`).
- Dropped libtch `dummy_cuda_dependency()` and `fake_cuda_dependency()` as libtorch ldd linking is okay now.
- Export nn/scheduler DefaultSchedulerOptions()
- Added nn/scheduler NewLRScheduler()
- Added nn/conv config options
- Fixed CUDA error `undefined reference to 'at::cuda::warp_size()'`
- Updated libtorch to 1.9. Generated 1716 APIs. There are API naming changes, e.g. `Name1` changed to `NameDim` or `NameTensor`.
- Applied a temporary fix for the huge number of learning groups returned from C at `libtch/tensor.go AtoGetLearningRates`
- Fixed incorrect `nn.AdamWConfig` and some documentation.
- Reworked `vision.ResNet` and `vision.DenseNet` to fix incorrect layers and a memory leak
- Changed `dutil.DataLoader.Reset()` to reshuffle when resetting the DataLoader if the flag is true
- Changed `dutil.DataLoader.Next()`. Deleted the batch size == 1 case for consistency: items are always returned in a slice `[]element dtype`, even with batch size = 1.
- Added `nn.CrossEntropyLoss` and `nn.BCELoss`
- Fixed `tensor.ForwardIs` to return `Tuple` and `TensorList` instead of always returning `TensorList`
- Changed exported augment options and made ColorJitter forward output dtype `uint8` for chaining with other augment options.
- #45 Fixed `init/RandInt` incorrect initialization
- #48 Fixed `init/RandInit` when initialized with mean = 0.0.
- Fixed multiple memory leaks at `vision/image.go`
- Fixed memory leak at `dutil/dataloader.go`
- Fixed multiple memory leaks at `efficientnet.go`
- Added `dataloader.Len()` method
- Fixed deleting input tensor inside function at `tensor/other.go` `tensor.CrossEntropyForLogits` and `tensor.AccuracyForLogits`
- Added warning to `varstore.LoadPartial` when tensor shapes mismatch between source and varstore.
- Fixed incorrect message for mismatched tensor shape at `nn.Varstore.Load`
- Fixed incorrect y -> x at `vision/aug/affine.go` getParam func
- Fixed double-free tensor at `vision/aug/function.go` Equalize func.
- Changed `vision/aug`: all input images should be `uint8` (Byte) dtype and the transformed output has the same dtype (`uint8`) so that `Compose()` can compose any transformer options.
- Fixed wrong result of `aug.RandomAdjustSharpness`
- Fixed memory leak at `aug/function.getAffineGrid`
- Changed `vision/aug` and corrected ColorJitter
- Changed `vision/aug` and corrected Resize
- Changed `dutil/sampler` to accept batch size from 1.
- Fixed double free in `vision/image.go/resizePreserveAspectRatio`
Skip this tag
Same as [0.3.10]
- Update installation at README.md
- [#38] fixed JIT model
- Added Optimizer Learning Rate Schedulers
- Added AdamW Optimizer
- #24, #26: fixed memory leak.
- #30: fixed varstore.Save() randomly panic - segmentfault
- #32: nn.Seq Forward return nil tensor if length of layers = 1
- [#36]: resolved image augmentation
- #20: Fixed `IValue.Value()` method returning `[]interface{}` instead of `[]Tensor`
- Added trainable JIT Module APIs and example/jit-train. Now a Python PyTorch model `.pt` can be loaded, then training/fine-tuning can continue in Go.
- Added `dutil` sub-package that serves PyTorch `DataSet` and `DataLoader` concepts
- Added function `gotch.CudaIfAvailable()`. NOTE that `device := gotch.NewCuda().CudaIfAvailable()` will throw an error if CUDA is not available.
- Switched back to install libtorch inside gotch library as go init() function is triggered after cgo called.
- #4 Automatically download and install Libtorch and setup environment variables.
- #6: Go-native tensor printing using the `fmt.Formatter` interface. Now a tensor can be printed out like `fmt.Printf("%.3f", tensor)` (for float type)
- nn/sequential: fixed missing case number of layers = 1 causing panic
- nn/varstore: fixed nil pointer at LoadPartial due to not breaking the loop
- Changed to use `map[string]*Tensor` at `nn/varstore.go`
- Changed to use `*Path` argument of `NewLayerNorm` method at `nn/layer-norm.go`
- Lots of clean-up of return variables, i.e. `retVal`, `err`
- Updated to Pytorch C++ APIs v1.7.0
- Switched back to `lib.AtoAddParametersOld` as `ato_add_parameters` has not been implemented correctly. Using the updated API causes the optimizer to stop working.
- Convert all APIs to using Pointer Receiver
- Added drawing image label at `example/yolo` example
- Added some example images and README files for `example/yolo` and `example/neural-style-transfer`
- Added `tensor.SaveMultiNew`
- Reverse changes #10 to original.
- #10: `ts.Drop()` and `ts.MustDrop()` can now be called multiple times without panic