Transformers documentation
ConvNeXT
This model was released on 2022-01-10 and added to Hugging Face Transformers on 2022-02-07.
ConvNeXT
Overview
The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
The abstract from the paper is the following:
The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually “modernize” a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
ConvNeXT architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.
- ConvNextForImageClassification is supported by this example script and notebook.
- See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ConvNextConfig
class transformers.ConvNextConfig
< source >( output_hidden_states: bool | None = False return_dict: bool | None = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False id2label: dict[int, str] | dict[str, str] | None = None label2id: dict[str, int] | dict[str, str] | None = None problem_type: typing.Optional[typing.Literal['regression', 'single_label_classification', 'multi_label_classification']] = None tokenizer_class: str | transformers.tokenization_utils_base.PreTrainedTokenizerBase | None = None num_channels: int = 3 patch_size: int | list[int] | tuple[int, int] = 4 num_stages: int = 4 hidden_sizes: list[int] | tuple[int, ...] | None = (96, 192, 384, 768) depths: list[int] | tuple[int, ...] | None = (3, 3, 9, 3) hidden_act: str = 'gelu' initializer_range: float = 0.02 layer_norm_eps: float = 1e-12 layer_scale_init_value: float = 1e-06 drop_path_rate: float = 0.0 image_size: int | list[int] | tuple[int, int] = 224 _out_features: list[str] | None = None _out_indices: list[int] | None = None )
Parameters
- output_hidden_states (bool, optional, defaults to False) — Whether or not the model should return all hidden-states.
- return_dict (bool, optional, defaults to True) — Whether to return a ModelOutput (dataclass) instead of a plain tuple.
- dtype (Union[str, torch.dtype], optional) — The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights.
- chunk_size_feed_forward (int, optional, defaults to 0) — The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?.
- is_encoder_decoder (bool, optional, defaults to False) — Whether the model is used as an encoder/decoder or not.
- id2label (Union[dict[int, str], dict[str, str]], optional) — A map from index (for instance prediction index, or target index) to label.
- label2id (Union[dict[str, int], dict[str, str]], optional) — A map from label to index for the model.
- problem_type (Literal[regression, single_label_classification, multi_label_classification], optional) — Problem type for XxxForSequenceClassification models. Can be one of "regression", "single_label_classification" or "multi_label_classification".
- tokenizer_class (Union[str, ~tokenization_utils_base.PreTrainedTokenizerBase], optional) — The class name of the model's tokenizer.
- num_channels (int, optional, defaults to 3) — The number of input channels.
- patch_size (Union[int, list[int], tuple[int, int]], optional, defaults to 4) — The size (resolution) of each patch.
- num_stages (int, optional, defaults to 4) — The number of stages in the model.
- hidden_sizes (Union[list[int], tuple[int, ...]], optional, defaults to (96, 192, 384, 768)) — Dimensionality (hidden size) at each stage of the model.
- depths (Union[list[int], tuple[int, ...]], optional, defaults to (3, 3, 9, 3)) — Depth (number of blocks) for each stage of the model.
- hidden_act (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in each block. For example, "gelu", "relu", "silu", etc.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- layer_scale_init_value (float, optional, defaults to 1e-06) — The initial value for the layer scale. Set to 0 to disable layer scale.
- drop_path_rate (float, optional, defaults to 0.0) — The drop rate for stochastic depth (drop path).
- image_size (Union[int, list[int], tuple[int, int]], optional, defaults to 224) — The size (resolution) of each image.
- out_features (list[str], optional) — Names of the intermediate hidden states (feature maps) to return from the backbone. Can be any of "stem", "stage1", "stage2", etc.
- out_indices (list[int], optional) — Indices of the intermediate hidden states (feature maps) to return from the backbone. Each index corresponds to one stage of the model.
This is the configuration class to store the configuration of a ConvNextModel. It is used to instantiate a ConvNext model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the facebook/convnext-tiny-224 architecture.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
Example:
>>> from transformers import ConvNextConfig, ConvNextModel
>>> # Initializing a ConvNext convnext-tiny-224 style configuration
>>> configuration = ConvNextConfig()
>>> # Initializing a model (with random weights) from the convnext-tiny-224 style configuration
>>> model = ConvNextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
ConvNextImageProcessor
class transformers.ConvNextImageProcessor
< source >( **kwargs: typing_extensions.Unpack[transformers.models.convnext.image_processing_convnext.ConvNextImageProcessorKwargs] )
Parameters
- crop_pct (float, optional, defaults to self.crop_pct) — Percentage of the image to crop. Only has an effect if size < 384.
- **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.
Constructs a ConvNextImageProcessor image processor.
preprocess
< source >( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] *args **kwargs: typing_extensions.Unpack[transformers.processing_utils.ImagesKwargs] ) → ~image_processing_base.BatchFeature
Parameters
- images (Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, list[PIL.Image.Image], list[numpy.ndarray], list[torch.Tensor]]) — Image to preprocess. Expects a single image or a batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
- return_tensors (str or TensorType, optional) — Returns stacked tensors if set to 'pt', otherwise returns a list of tensors.
- **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.
Returns
~image_processing_base.BatchFeature
- data (dict) — Dictionary of lists/arrays/tensors returned by the call method ('pixel_values', etc.).
- tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers into PyTorch/NumPy tensors at initialization.
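A short usage sketch follows (a minimal illustration, not part of the reference above): it assumes the facebook/convnext-tiny-224 checkpoint is reachable and that Pillow and NumPy are installed; the random image is only a stand-in for real data.
>>> from transformers import ConvNextImageProcessor
>>> from PIL import Image
>>> import numpy as np

>>> # Load the preprocessing configuration shipped with a pretrained checkpoint
>>> image_processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-tiny-224")

>>> # Any RGB image works; a random one is used here purely for illustration
>>> image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

>>> inputs = image_processor(image, return_tensors="pt")
>>> # pixel_values has been resized, center-cropped (see crop_pct) and normalized;
>>> # expected shape for this checkpoint: [1, 3, 224, 224]
>>> print(list(inputs["pixel_values"].shape))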
ConvNextImageProcessorPil
class transformers.ConvNextImageProcessorPil
< source >( **kwargs: typing_extensions.Unpack[transformers.models.convnext.image_processing_convnext.ConvNextImageProcessorKwargs] )
Parameters
- crop_pct (float, optional, defaults to self.crop_pct) — Percentage of the image to crop. Only has an effect if size < 384.
- **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.
Constructs a ConvNextImageProcessorPil image processor.
preprocess
< source >( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] *args **kwargs: typing_extensions.Unpack[transformers.processing_utils.ImagesKwargs] ) → ~image_processing_base.BatchFeature
Parameters
- images (Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, list[PIL.Image.Image], list[numpy.ndarray], list[torch.Tensor]]) — Image to preprocess. Expects a single image or a batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
- return_tensors (str or TensorType, optional) — Returns stacked tensors if set to 'pt', otherwise returns a list of tensors.
- **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.
Returns
~image_processing_base.BatchFeature
- data (dict) — Dictionary of lists/arrays/tensors returned by the call method ('pixel_values', etc.).
- tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers into PyTorch/NumPy tensors at initialization.
ConvNextModel
class transformers.ConvNextModel
< source >( config )
Parameters
- config (ConvNextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Convnext Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >( pixel_values: torch.FloatTensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using ConvNextImageProcessor. See ConvNextImageProcessor.__call__() for details.
Returns
BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvNextConfig) and inputs.
The ConvNextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
- last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
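For reference, a minimal usage sketch of the bare model is shown below. It mirrors the classification example further down and assumes network access to the facebook/convnext-tiny-224 checkpoint and the datasets library; the commented shape is what is expected for a 224x224 input.
>>> from transformers import AutoImageProcessor, ConvNextModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
>>> model = ConvNextModel.from_pretrained("facebook/convnext-tiny-224")

>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Feature map of the final stage: (batch_size, num_channels, height, width),
>>> # expected to be [1, 768, 7, 7] for convnext-tiny with a 224x224 input
>>> print(list(outputs.last_hidden_state.shape))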
ConvNextForImageClassification
class transformers.ConvNextForImageClassification
< source >( config )
Parameters
- config (ConvNextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >( pixel_values: torch.FloatTensor | None = None labels: torch.LongTensor | None = None **kwargs ) → ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using ConvNextImageProcessor. See ConvNextImageProcessor.__call__() for details.
- labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), if config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvNextConfig) and inputs.
The ConvNextForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage.
Example:
>>> from transformers import AutoImageProcessor, ConvNextForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
>>> model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
...