10/19/2023

nn and nn.Sequential use

When building a deep learning model with autograd alone, the code is too abstract and low-level and requires a large amount of boilerplate; nn.Module is more convenient.

First, writing your own layer based on nn.Module

1. nn.Module represents one layer (or several layers) of a network. You generally write your own layer by subclassing nn.Module.
2. When implementing your own layer based on nn.Module, you must call the Module constructor inside your own constructor.
3. The learnable parameters are defined in the constructor and stored in the Module as Parameters (a kind of tensor with automatic differentiation enabled by default) via nn.Parameter(); they can be iterated with parameters() or named_parameters().
4. Because gradients are derived automatically, there is no need to write or call a backward() function after calling forward().
5. The forward function implements the forward propagation process, and its input can be one or more tensors. forward (layer.forward) is generally not called explicitly; instead you call layer(input), which executes forward().

A Module object can contain sub-modules. The Module automatically detects its own Parameters and treats them as learnable parameters; in addition, the main module can recursively find the parameters of its sub-modules. In the constructor you can use a previously defined layer (module) as a sub-module of the current module, and its learnable parameters then also become learnable parameters of the current module.

About the naming convention of learnable parameters
1. A module's own parameters are generally named self.para_name (e.g. self.w).
2. A sub-module's parameters are named submodule_name.para_name (e.g. sub_module.w).

Points to note when reading module-related documentation
1. The parameters of the constructor, such as nn.Linear(in_features, out_features).
2. The properties, learnable parameters, and sub-modules. For example, nn.Linear includes two learnable parameters, w and b, and no sub-modules.
3. The input and output shapes, such as (batch_size, in_features) -> (batch_size, out_features). These layers assume the input is a batch, not a single sample. If there is only a single input, you must call tensor.unsqueeze(0) to disguise it as a batch with batch_size=1.

Second, commonly used neural network layers

1. Image-related layers: convolution layers (some with deconvolution layers), etc. The typical image pipeline is: Image -> ToTensor (unsqueeze() if it is a single image) -> perform various processing -> (squeeze() if it is a single image) -> ToPILImage -> Image.
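The points above on writing your own layer can be sketched as follows. This is a minimal illustration, not the post's own code; the class name MyLinear and the parameter names w and b are chosen only to mirror the nn.Linear example in the notes:

```python
import torch
from torch import nn

class MyLinear(nn.Module):
    """A Linear-style layer written from scratch (illustrative sketch)."""

    def __init__(self, in_features, out_features):
        super().__init__()  # must call Module's constructor first
        # nn.Parameter is a Tensor with requires_grad=True by default;
        # assigning it as an attribute registers it as a learnable parameter
        self.w = nn.Parameter(torch.randn(in_features, out_features))
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # forward propagation only; autograd derives backward() automatically
        return x @ self.w + self.b

layer = MyLinear(4, 3)
x = torch.randn(2, 4)   # input is a batch: (batch_size, in_features)
y = layer(x)            # call layer(input), not layer.forward(input)
print(y.shape)          # torch.Size([2, 3])
```

Note that calling `layer(x)` goes through `Module.__call__`, which runs hooks and then `forward`; calling `forward` directly would skip those hooks.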
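The sub-module behavior and the submodule_name.para_name naming convention described above can be seen directly with named_parameters(). A minimal sketch, assuming two nn.Linear sub-modules (the class and attribute names are illustrative):

```python
import torch
from torch import nn

class TwoLayerNet(nn.Module):
    """Uses previously defined layers as sub-modules; the parent module
    finds their parameters recursively."""

    def __init__(self):
        super().__init__()
        # these sub-modules' parameters become parameters of this module
        self.layer1 = nn.Linear(4, 8)
        self.layer2 = nn.Linear(8, 3)

    def forward(self, x):
        return self.layer2(torch.relu(self.layer1(x)))

net = TwoLayerNet()
# names follow submodule_name.para_name,
# e.g. "layer1.weight", "layer1.bias", "layer2.weight", "layer2.bias"
for name, p in net.named_parameters():
    print(name, tuple(p.shape))
```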
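The unsqueeze()/squeeze() trick for single inputs mentioned above matters most for image layers, which expect a (batch, channels, height, width) input. A sketch with a convolution layer (the layer sizes are arbitrary):

```python
import torch
from torch import nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # expects (N, C, H, W)

img = torch.rand(3, 32, 32)   # a single image: (C, H, W)
batch = img.unsqueeze(0)      # disguise it as a batch with batch_size=1
out = conv(batch)             # shape (1, 8, 32, 32)
single = out.squeeze(0)       # drop the fake batch dim, e.g. before ToPILImage
print(single.shape)           # torch.Size([8, 32, 32])
```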
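The post's title also mentions nn.Sequential, which chains layers so that each layer's output feeds the next; a minimal sketch (the layer sizes are arbitrary):

```python
import torch
from torch import nn

# sub-modules run in the order given; parameters are named by index,
# e.g. "0.weight", "0.bias", "2.weight", "2.bias"
net = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 3),
)
y = net(torch.randn(2, 4))  # (batch_size=2, in_features=4) -> (2, 3)
```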