
branch update #34

Merged
merged 701 commits into from
Apr 23, 2024

Conversation

Cemberk
Collaborator

@Cemberk Cemberk commented Apr 23, 2024

ds

yhshin11 and others added 30 commits November 27, 2023 14:59
* fix assisted decoding attention_cat

* fix attention_mask for assisted decoding

* fix attention_mask len

* fix attn len

* Use a cleaner way to prepare assistant model inputs

* fix param meaning

* fix param name

* fix assistant model inputs

* update token type ids

* fix assistant kwargs copy

* add encoder-decoder tests of assisted decoding

* check if assistant kwargs contains updated keys

* revert test

* fix whisper tests

* fix assistant kwargs

* revert whisper test

* delete _extend funcs
…label with "-" (huggingface#27325)

* fix group_sub_entities bug

* add space
* Fix code snippet

* Improve code snippet
* docs: replace torch.distributed.run by torchrun

 `transformers` now officially supports PyTorch >= 1.10.
 The entrypoint `torchrun` is present from 1.10 onwards.

Signed-off-by: Peter Pan <[email protected]>

* Update src/transformers/trainer.py

with @ArthurZucker's suggestion

Co-authored-by: Arthur <[email protected]>

---------

Signed-off-by: Peter Pan <[email protected]>
Co-authored-by: Arthur <[email protected]>
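As a hedged illustration of the `torchrun` migration described above (the script name and flag values are hypothetical, not from this PR):

```shell
# Migrate a launch command from the deprecated module invocation to the
# torchrun entrypoint available since PyTorch 1.10. The flags pass through
# unchanged; only the launcher name differs.
old_cmd="python -m torch.distributed.run --nproc_per_node=2 train.py"
new_cmd="${old_cmd/python -m torch.distributed.run/torchrun}"
echo "$new_cmd"
```

Both commands accept the same arguments, so the substitution is purely textual.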
* Update default ChatML template

* Update docs/warnings

* Update docs/source/en/chat_templating.md

Co-authored-by: Arthur <[email protected]>

* Slight rework

---------

Co-authored-by: Arthur <[email protected]>
* translate work

* update

* update

* update [[autodoc]]

* Update callback.md

---------

Co-authored-by: jiaqiw <[email protected]>
* Add `model_docs`

* Add

* Update Model adoc

* Update docs/source/ja/model_doc/bark.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/model_doc/beit.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/model_doc/bit.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/model_doc/blenderbot.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/model_doc/blenderbot-small.md

Co-authored-by: Steven Liu <[email protected]>

* update review-1

* Update toctree.yml

* translating docs and fixes of PR huggingface#27401

* Update docs/source/ja/model_doc/bert.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/model_doc/bert-generation.md

Co-authored-by: Steven Liu <[email protected]>

* Update the model docs

---------

Co-authored-by: Steven Liu <[email protected]>
…duler_kwargs (huggingface#27595)

* Fix passing scheduler-specific kwargs through TrainingArguments `lr_scheduler_kwargs`

* Added test for lr_scheduler_kwargs
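A framework-free sketch of the fix described above (the factory name and kwargs here are illustrative, not the library's actual signatures): user-supplied scheduler kwargs are forwarded to the scheduler factory instead of being dropped.

```python
def create_scheduler(name, lr_scheduler_kwargs=None):
    # Forward any scheduler-specific kwargs the user passed through the
    # training arguments, rather than silently ignoring them.
    return {"name": name, **(lr_scheduler_kwargs or {})}

# The extra num_cycles kwarg now reaches the scheduler configuration.
sched = create_scheduler("cosine_with_restarts", lr_scheduler_kwargs={"num_cycles": 2})
print(sched)
```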
* fix

* fix

---------

Co-authored-by: ydshieh <[email protected]>
* First draft

* Add backwards compatibility

* More improvements

* More improvements

* Improve error message

* Address comment

* Add conversion script

* Fix style

* Update code snippet

* Address comment

* Apply suggestions from code review

Co-authored-by: amyeroberts <[email protected]>

---------

Co-authored-by: amyeroberts <[email protected]>
* Add madlad-400 models

* Add madlad-400 to the doc table

* Update docs/source/en/model_doc/madlad-400.md

Co-authored-by: amyeroberts <[email protected]>

* Fill missing details in documentation

* Update docs/source/en/model_doc/madlad-400.md

Co-authored-by: amyeroberts <[email protected]>

* Do not doctest madlad-400

Tests are timing out.

---------

Co-authored-by: amyeroberts <[email protected]>
* fixes

* more fixes

* style fix

* more fix

* comments
* first draft

* benchmarks

* feedback
* add distribution head to forecasting

* formatting

* Add generate function for forecasting

* Add generate function to prediction task

* formatting

* use argsort

* add past_observed_mask ordering

* fix arguments

* docs

* add back test_model_outputs_equivalence test

* formatting

* cleanup

* formatting

* use ACT2CLS

* formatting

* fix add_start_docstrings decorator

* add distribution head and generate function to regression task

add distribution head and generate function to regression task. Also added PatchTSTForForecastingOutput and PatchTSTForRegressionOutput.

* add distribution head and generate function to regression task

add distribution head and generate function to regression task. Also added PatchTSTForForecastingOutput and PatchTSTForRegressionOutput.

* fix typos

* add forecast_masking

* fixed tests

* use set_seed

* fix doc test

* formatting

* Update docs/source/en/model_doc/patchtst.md

Co-authored-by: NielsRogge <[email protected]>

* better var names

* rename PatchTSTTranspose

* fix argument names and docs string

* remove compute_num_patches and unused class

* remove assert

* renamed to PatchTSTMasking

* use num_labels for classification

* use num_labels

* use default num_labels from super class

* move model_type after docstring

* renamed PatchTSTForMaskPretraining

* bs -> batch_size

* more review fixes

* use hidden_state

* rename encoder layer and block class

* remove commented seed_number

* edit docstring

* Add docstring

* formatting

* use past_observed_mask

* doc suggestion

* make fix-copies

* use Args:

* add docstring

* add docstring

* change some variable names and add PatchTST before some class names

* formatting

* fix argument types

* fix tests

* change x variable to patch_input

* format

* formatting

* fix-copies

* Update tests/models/patchtst/test_modeling_patchtst.py

Co-authored-by: Patrick von Platen <[email protected]>

* move loss to forward

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: Patrick von Platen <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: Patrick von Platen <[email protected]>

* formatting

* fix a bug when pre_norm is set to True

* output_hidden_states is set to False as default

* set pre_norm=True as default

* format docstring

* format

* output_hidden_states is None by default

* add missing docs

* better var names

* docstring: remove default to False in output_hidden_states

* change labels name to target_values in regression task

* format

* fix tests

* change to forecast_mask_ratios and random_mask_ratio

* change mask names

* change future_values to target_values param in the prediction class

* remove nn.Sequential and make PatchTSTBatchNorm class

* black

* fix argument name for prediction

* add output_attentions option

* add output_attentions to PatchTSTEncoder

* formatting

* Add attention output option to all classes

* Remove PatchTSTEncoderBlock

* create PatchTSTEmbedding class

* use config in PatchTSTPatchify

* Use config in PatchTSTMasking class

* add channel_attn_weights

* Add PatchTSTScaler class

* add output_attentions arg to test function

* format

* Update doc with image patchtst.md

* fix-copies

* rename Forecast <-> Prediction

* change name of a few parameters to match with PatchTSMixer.

* Remove *ForForecasting class to match with other time series models.

* make style

* Remove PatchTSTForForecasting in the test

* remove PatchTSTForForecastingOutput class

* change test_forecast_head to test_prediction_head

* style

* fix docs

* fix tests

* change num_labels to num_targets

* Remove PatchTSTTranspose

* remove arguments in PatchTSTMeanScaler

* remove arguments in PatchTSTStdScaler

* add config as an argument to all the scaler classes

* reformat

* Add norm_eps for batchnorm and layernorm

* reformat.

* reformat

* edit docstring

* update docstring

* change variable name pooling to pooling_type

* fix output_hidden_states as tuple

* fix bug when calling PatchTSTBatchNorm

* change stride to patch_stride

* create PatchTSTPositionalEncoding class and restructure the PatchTSTEncoder

* formatting

* initialize scalers with configs

* edit output_hidden_states

* style

* fix forecast_mask_patches doc string

* doc improvements

* move summary to the start

* typo

* fix docstring

* turn off masking when using prediction, regression, classification

* return scaled output

* adjust output when using distribution head

* remove _num_patches function in the config

* get config.num_patches from patchifier init

* add output_attentions docstring, remove tuple in output_hidden_states

* change SamplePatchTSTPredictionOutput and SamplePatchTSTRegressionOutput to SamplePatchTSTOutput

* remove print("model_class: ", model_class)

* change encoder_attention_heads to num_attention_heads

* change norm to norm_layer

* change encoder_layers to num_hidden_layers

* change shared_embedding to share_embedding, shared_projection to share_projection

* add output_attentions

* more robust check of norm_type

* change dropout_path to path_dropout

* edit docstring

* remove positional_encoding function and add _init_pe in PatchTSTPositionalEncoding

* edit shape of cls_token and initialize it

* add a check on the num_input_channels.

* edit head_dim in the Prediction class to allow the use of cls_token

* remove some positional_encoding_type options, remove learn_pe arg, initialize pe

* change Exception to ValueError

* format

* norm_type is "batchnorm"

* make style

* change cls_token shape

* Change forecast_mask_patches to num_mask_patches. Remove forecast_mask_ratios.

* Bring PatchTSTClassificationHead on top of PatchTSTForClassification

* change encoder_ffn_dim to ffn_dim and edit the docstring.

* update variable names to match with the config

* add generation tests

* change num_mask_patches to num_forecast_mask_patches

* Add examples explaining the use of these models

* make style

* Revert "Revert "[time series] Add PatchTST (huggingface#25927)" (huggingface#27486)"

This reverts commit 78f6ed6.

* make style

* fix default std scaler's minimum_scale

* fix docstring

* close code blocks

* Update docs/source/en/model_doc/patchtst.md

Co-authored-by: amyeroberts <[email protected]>

* Update tests/models/patchtst/test_modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/configuration_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/patchtst/modeling_patchtst.py

Co-authored-by: amyeroberts <[email protected]>

* fix tests

* add add_start_docstrings

* move examples to the forward's docstrings

* update prepare_batch

* update test

* fix test_prediction_head

* fix generation test

* use seed to create generator

* add output_hidden_states and config.num_patches

* add loc and scale args in PatchTSTForPredictionOutput

* edit outputs if not return_dict

* use self.share_embedding to check instead of checking type.

* remove seed

* make style

* seed is an optional int

* fix test

* generator device

* Fix assertTrue test

* swap order of items in outputs when return_dict=False.

* add mask_type and random_mask_ratio to unittest

* Update modeling_patchtst.py

* add add_start_docstrings for regression model

* make style

* update model path

* Edit the ValueError comment in forecast_masking

* update examples

* make style

* fix commented code

* update examples: remove config from from_pretrained call

* Edit example outputs

* Set default target_values to None

* remove config setting in regression example

* Update configuration_patchtst.py

* Update configuration_patchtst.py

* remove config from examples

* change default d_model and ffn_dim

* norm_eps default

* set has_attentions to True and define self.seq_length = self.num_patches

* update docstring

* change variable mask_input to do_mask_input

* fix blank space.

* change logger.debug to logger.warning.

* remove unused PATCHTST_INPUTS_DOCSTRING

* remove all_generative_model_classes

* set test_missing_keys=True

* remove undefined params in the docstring.

---------

Co-authored-by: nnguyen <[email protected]>
Co-authored-by: NielsRogge <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Nam Nguyen <[email protected]>
Co-authored-by: Wesley Gifford <[email protected]>
Co-authored-by: amyeroberts <[email protected]>
…uggingface#27700)

* Update modeling_llama.py

* Update modeling_open_llama.py

* Update modeling_gpt_neox.py

* Update modeling_mistral.py

* Update modeling_persimmon.py

* Update modeling_phi.py

* Update modeling_falcon.py

* Update modeling_gpt_neox_japanese.py
* add working conversion script

* first non-working version of modeling code

* update modeling code (working)

* make style

* make fix-copies

* add config docstrings

* add config to ignore docstrings formatage due to unconventional markdown

* fix copies

* fix generation num_return_sequences

* enrich docs

* add and fix tests beside integration tests

* update integration tests

* update repo id

* add tie weights and make style

* correct naming in .md

* fix imports and so on

* correct docstrings

* fix fp16 speech forward

* fix speechencoder attention

* make style

* fix copied from

* rename SeamlessM4Tv2-v2 to SeamlessM4Tv2

* Apply suggestions on configuration

Co-authored-by: Arthur <[email protected]>

* remove useless public models

* fix private models + better naming for T2U models

* clean speech encoder relative position embeddings

* refactor chunk attention

* add docstrings to chunk attention method

* improve naming and docstrings

* rename some attention variables + add temperature sampling in T2U model

* rename DOCSTRINGS variable names

* make style + remove 2 useless config parameters

* enrich model card

* remove any attention_head reference + fix temperature in T2U

* new fmt and make style

* Apply suggestions from code review

Co-authored-by: Arthur <[email protected]>

* rename spkr_id->speaker_id and change docstrings of get_char_input_ids

* simplify v2attention

* make style

* Update seamless_m4t_v2.md

* update code and tests with last update

* update repo ids

* fill article name, abstract and authors

* update not_doctested and slow_doc tests

---------

Co-authored-by: Arthur <[email protected]>
* partial translation of installation

* Finish translation of installation

* Update installation.mdx

* Rename installation.mdx to installation.md

* Typos

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Update docs/source/fr/installation.md

Co-authored-by: Arthur <[email protected]>

* Address review comments

---------

Co-authored-by: Arthur <[email protected]>
* Remove config reference and pass num_patches for PatchTSTforPrediction

* ensure return_dict is properly set

---------

Co-authored-by: Wesley M. Gifford <[email protected]>
michaelfeil and others added 25 commits December 22, 2023 11:41
…on `accelerate==0.20.1` (huggingface#28171)

* fix-accelerate-version

* updated with exported ACCELERATE_MIN_VERSION,

* update string in ACCELERATE_MIN_VERSION
* add check_support_list.py

* fix

* fix

---------

Co-authored-by: ydshieh <[email protected]>
…face#28114)

* fix frames

* use smaller chunk length

* correct beam search + tentative stride

* fix whisper word timestamp in batch

* add test batch generation with return token timestamps

* Apply suggestions from code review

Co-authored-by: amyeroberts <[email protected]>
Co-authored-by: Sanchit Gandhi <[email protected]>

* clean a test

* make style + correct typo

* write clearer comments

* explain test in comment

---------

Co-authored-by: sanchit-gandhi <[email protected]>
Co-authored-by: amyeroberts <[email protected]>
Co-authored-by: Sanchit Gandhi <[email protected]>
…of bounding box. (huggingface#27842)

* fix: minor enhancement and fix in bounding box visualization example

The example that visualizes the bounding box did not consider an edge case:
the bounding box can be un-normalized. With the same code, we could not get
results on a different dataset whose bounding boxes are un-normalized. This commit fixes that.

* run make clean

* add an additional note on the scenarios where the box viz code works

---------

Co-authored-by: Anindyadeep <[email protected]>
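A minimal sketch of the edge case described above (the helper name and heuristic are hypothetical, not the example's actual code): boxes may arrive either normalized to [0, 1] or already in pixel coordinates, and drawing code must handle both.

```python
def to_pixel_box(box, width, height):
    """Scale a normalized box to pixel coordinates; pass pixel boxes through."""
    x0, y0, x1, y1 = box
    # Heuristic: if every coordinate is <= 1, treat the box as normalized.
    if max(box) <= 1.0:
        return (x0 * width, y0 * height, x1 * width, y1 * height)
    return box

normalized = to_pixel_box((0.1, 0.2, 0.5, 0.8), 640, 480)
absolute = to_pixel_box((64, 96, 320, 384), 640, 480)
```

The heuristic misfires only for degenerate pixel boxes that fit entirely inside the top-left pixel, which is the kind of scenario the added note in the docs is meant to flag.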
* fix llava index errors

* forward contrib credits from original implementation and fix

* better fix

* final fixes and fix all tests

* fix

* fix nit

* fix tests

* add regression tests

---------

Co-authored-by: gullalc <[email protected]>
…odules (huggingface#27950)

* v1

* add docstring

* add tests

* add awq 0.1.8

* oops

* fix test
Update modeling_utils.py
…ingface#28223)

update docs around mixing hf scheduler with deepspeed optimizer
…level timestamps computation (huggingface#28288)

* Update modeling_whisper.py to support MPS backend

Fixed some issue with MPS backend.

First, torch.std_mean is not implemented on MPS (and is not scheduled for implementation), while the individual torch.std and torch.mean are.
Second, the MPS backend does not support float64, so it cannot cast from float32 to float64. Inverting the double() call so it runs once the matrix is on the CPU fixes the issue and should not change the logic.

* Found another instruction in modeling_whisper.py not implemented by MPS

After a load test, where I transcribed a 2-hour audio file, I hit a branch that was not fixed by the previous commit.
Similar fix, where torch.std_mean is changed into torch.std and torch.mean

* Update modeling_whisper.py removed trailing white spaces

Removed trailing white spaces

* Update modeling_whisper.py to use is_torch_mps_available()

Using is_torch_mps_available() instead of catching the NotImplementedError exception

* Update modeling_whisper.py sorting the import block

Sorting the utils import block

* Update src/transformers/models/whisper/modeling_whisper.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/whisper/modeling_whisper.py

Co-authored-by: amyeroberts <[email protected]>

* Update src/transformers/models/whisper/modeling_whisper.py

Co-authored-by: amyeroberts <[email protected]>

---------

Co-authored-by: amyeroberts <[email protected]>
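The fused-call workaround described above can be sketched without torch, using Python's statistics module in its place (purely illustrative; the PR itself edits torch calls in modeling_whisper.py): where a fused std-and-mean helper is unavailable, computing the two statistics separately is numerically equivalent.

```python
import statistics

values = [1.0, 2.0, 3.0, 4.0]

# Fused form (analogous to torch.std_mean, which MPS lacks):
def std_mean(xs):
    return statistics.stdev(xs), statistics.mean(xs)

# Separate form (analogous to torch.std + torch.mean, which MPS has):
std = statistics.stdev(values)
mean = statistics.mean(values)
```

Because both forms reduce to the same two computations, splitting the call changes only which kernels the backend must provide, not the result.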
* Update to pull function from proper lib

* Fix ruff formatting error

* Remove accidentally added file
SWDEV-448998: Update pytest import_path location
@Cemberk Cemberk changed the title from "asd" to "branch update" Apr 23, 2024
@Cemberk Cemberk merged commit 72a0aeb into v4.36-release Apr 23, 2024
1494 of 2522 checks passed
@Cemberk Cemberk deleted the main branch August 14, 2024 19:00
@Cemberk Cemberk restored the main branch August 15, 2024 19:50
@Cemberk Cemberk deleted the main branch September 19, 2024 19:34