
Conversation


@MGAMZ MGAMZ commented Oct 17, 2025

Motivation

The maintenance pace of mmengine has been slowing down. To help this excellent architecture move forward, this PR introduces a number of upcoming compatibility improvements and some minor optimizations.

Modification

Packaging according to PyPA

Packaging via setup.py is deprecated and due to be removed. PyPA now recommends pyproject.toml as the packaging configuration, so the packaging files are refactored accordingly.

Upgrade yapf from 0.32.0 to 0.43.0

The previously pinned yapf does not work on Python 3.13 because lib2to3 has been removed. To use the latest yapf, the pre-commit repository URL has to be changed, so .pre-commit-config.yaml receives a small modification. This should only introduce minor formatting differences.

Upgrade numpy from 1.2x to 2.2.x+

numpy.compat is deprecated, so tests/data/config/lazy_module_config/test_ast_transform.py and tests/test_config/test_lazy.py now import numpy.fft instead.
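
A minimal sketch of the replacement in those test configs; the surrounding test code is unchanged:

# numpy.compat was removed in NumPy 2.x, so any real submodule works for
# the lazy-import tests; numpy.fft is used here.
from numpy import fft  # previously: from numpy import compat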

torch.compile configuration

torch.compile accepts many arguments, and mmengine forwards everything under the compile key of a config file to torch.compile.
When the user specifies compile in the config, mmengine checks several dependencies and treats the mere presence of the compile key as the flag that torch.compile is enabled.
However, this is not accurate: the compile config can contain a disable argument that states whether the compiler actually runs. mmengine therefore needs to determine carefully whether the user is really enabling torch.compile, which required a small modification in mmengine/_strategy/base.py (see the sketch below).
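
A minimal sketch of the intended check, assuming the parsed compile section is held in a variable such as compile_cfg (illustrative names, not the exact mmengine code):

# Treat torch.compile as enabled only when a compile section is present
# AND it does not disable itself via the `disable` argument.
compile_cfg = getattr(cfg, 'compile', None)  # may be None, bool, or dict
if isinstance(compile_cfg, dict):
    compile_enabled = not compile_cfg.get('disable', False)
else:
    compile_enabled = bool(compile_cfg)

if compile_enabled:
    # dependency checks and the actual torch.compile(...) call
    # happen only in this branch
    ...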

Message Hub error caught

The message hub receives critical tensor inputs and displays them. Internally, the item() method is called when handling values (lr, loss, metrics, time, epoch, etc.). When an invalid value appears, the message hub simply crashes the process because of an assertion.

Since users can easily push a malformed value to the message hub from any line of code, a more robust approach is to emit a warning and return the value as-is, letting the outer functions decide whether the directly returned value can still be processed. A minimal sketch of this fallback follows.
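
A hedged sketch of the fallback, with _to_scalar standing in for the real value-handling helper (illustrative name):

import warnings

import torch


def _to_scalar(value):
    # Best-effort conversion used when a value is pushed to the message hub.
    if isinstance(value, torch.Tensor):
        try:
            return value.item()  # fails for non-scalar or invalid tensors
        except (ValueError, RuntimeError):
            warnings.warn(
                'Value could not be converted with .item(); returning it '
                'unchanged for the caller to handle.')
            return value
    return value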

Message Hub ignore torch.compile

The message hub's operations inevitably trigger graph breaks during compilation, so they are now disabled under torch.compile.
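
A minimal sketch of one way to do this with PyTorch's torch.compiler.disable decorator; the exact method(s) that mmengine decorates may differ:

import torch


class MessageHub:

    @torch.compiler.disable
    def update_scalar(self, key, value):
        # Excluded from torch.compile tracing, so logging here no longer
        # forces a graph break inside a compiled training step.
        ...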

Support for Python-style config files containing model_wrapper_constructor or model_wrapper

The current parsing of constructor_cfg does not support the following case (python-style configuration file):

optim_wrapper = dict(
    type=DeepSpeedOptimWrapper,
    optimizer=dict(type=AdamW, lr=lr, weight_decay=weight_decay),
    accumulative_counts=grad_accumulation,
    constructor=dict(type=DefaultOptimWrapperConstructor),
)

So the corresponding init logic is added:

constructor_cfg = optim_wrapper_cfg.pop('constructor', None)
if constructor_cfg is None:
    constructor_cfg = dict(type=DefaultOptimWrapperConstructor)

A similar issue exists for model_wrapper_cfg, so mmengine.runner.runner.wrap_model is changed accordingly; a rough sketch of the idea follows.
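
A hedged sketch of the analogous handling for model_wrapper_cfg; the registry lookup and variable names are illustrative, not the exact wrap_model code:

from mmengine.registry import MODEL_WRAPPERS

model_wrapper_type = model_wrapper_cfg.get('type')
if isinstance(model_wrapper_type, str):
    # registry-string style config: resolve the class from the registry
    model_wrapper_type = MODEL_WRAPPERS.get(model_wrapper_type)
# otherwise the config already carries the class object (pure-Python style)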

torch.load usage

torch.load now expects an explicit weights_only argument. It is set to False by default here to avoid the risk of breaking the loading of existing checkpoints.
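
A minimal sketch of the explicit call; map_location is shown only for completeness:

import torch

# Passing weights_only explicitly avoids the FutureWarning from newer
# PyTorch and keeps checkpoints that store non-tensor objects loadable.
checkpoint = torch.load(filename, map_location='cpu', weights_only=False)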

Resume strategy

Advancing the dataloader to skip data that has already been trained on makes no sense for a dataloader using an infinite sampler, and it can waste a lot of time on data preprocessing, because each next(self.dataloader_iterator) call actually performs all preprocessing steps. A rough sketch of the changed fast-forward logic is shown below.
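
A hedged sketch of the changed fast-forward logic; variable names are illustrative, not the exact runner code:

from mmengine.dataset import InfiniteSampler

# Skip the fast-forward loop for infinite samplers: there is no epoch
# boundary to restore, and every next() call pays the full preprocessing
# cost of a batch that would be thrown away anyway.
if not isinstance(dataloader.sampler, InfiniteSampler):
    for _ in range(resumed_iter):
        next(dataloader_iterator)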

Logic Breaking

In scenarios where we need to ensure that all data are visited with the same frequency over the course of training, this modification might be unsuitable.

Removal of deprecated usages

  • torch.cuda.amp is no longer used; it is replaced with an import from torch.amp, after which GradScaler = partial(amp_GradScaler, device='cuda') keeps the CUDA default (see the sketch after this list).
  • pkg_resources is no longer supported and is due to be completely abandoned in Nov. 2025. It is replaced with importlib, which causes many modifications in mmengine/utils/package_utils.py.
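
A combined sketch of the two replacements, using only the names mentioned above; the helper function is illustrative, not the exact package_utils code:

from functools import partial
from importlib import metadata

from torch.amp import GradScaler as amp_GradScaler

# CUDA-bound scaler replacing the deprecated torch.cuda.amp.GradScaler.
GradScaler = partial(amp_GradScaler, device='cuda')


def get_installed_version(package: str) -> str:
    # importlib.metadata replaces the pkg_resources lookup.
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError as e:
        raise RuntimeError(f'{package} is not installed') from e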

Other fixes

  • Fix lint issues.
  • Improve the error report hint in vis_backend.
  • Make lint happy.

Included PRs

#1650
#1654 (partially)
#1639
#1610
#1617
#1608
(Maybe missing some PRs)

Checklist

As this is a personal effort, I am unable to check every item here.

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
  3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDetection or MMPretrain.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

MGAMZ added 30 commits July 21, 2024 17:44
FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
FSDP.optim_state_dict_to_load requires the following parameters:

model: Module,
optim: Optimizer,
optim_state_dict: Dict[str, Any]
…tions

The current runner implementation does not yet support pure-Python style configurations for the model wrapper class. I follow the mainstream implementation to support this feature.
This may be due to a version conflict. Newer PyTorch may have introduced this optimizer.
2. Add torch compiler disable flag to message hub class.
3. The compile-time fault override has been moved from history buffer to message hub.
4. The MMDistributedDataParallel module has now been reverted to the original MMEngine implementation. The earlier modification may have been related to train_step changes in older projects; such a modification will be achieved by inheriting a new class in the future.
@MGAMZ MGAMZ changed the title Dev/contri 251017 Multiple Enhancement to Make MMENGINE walk further Oct 17, 2025

MGAMZ commented Oct 17, 2025

Another great project has withered away.

@MGAMZ MGAMZ marked this pull request as ready for review October 17, 2025 08:28
@Copilot Copilot AI review requested due to automatic review settings October 17, 2025 08:28

@Copilot Copilot AI left a comment


Pull Request Overview

This PR modernizes packaging and test compatibility, updates third-party dependency usage, and makes extensive formatting cleanups across tests.

  • Replace deprecated pkg_resources and numpy.compat usages with importlib.metadata and numpy.fft.
  • Widespread test code formatting/alignment improvements to satisfy updated linters and style.
  • Minor import and skip condition adjustments to improve compatibility.

Reviewed Changes

Copilot reviewed 156 out of 159 changed files in this pull request and generated no comments.

Summary per file (each row lists the file followed by its description):
tests/test_visualizer/test_visualizer.py Formatting and multi-line argument alignment in visualizer tests.
tests/test_visualizer/test_vis_backend.py Formatting and alignment in vis backend tests.
tests/test_utils/test_timer.py Skip condition formatting tweaks.
tests/test_utils/test_progressbar.py Formatting and argument alignment in progress bar tests.
tests/test_utils/test_package_utils.py Replace pkg_resources with importlib.metadata; update expected exceptions.
tests/test_utils/test_misc.py Minor formatting changes.
tests/test_utils/test_dl_utils/test_setup_env.py Dict formatting update.
tests/test_testing/test_runner_test_case.py Formatting adjustments.
tests/test_structures/test_pixel_data.py Dict formatting changes.
tests/test_structures/test_label_data.py Formatting and skip condition alignment.
tests/test_structures/test_instance_data.py Dict and error message formatting tweaks.
tests/test_structures/test_data_element.py Setter formatting, dict alignment, and skip condition formatting.
tests/test_strategies/test_fsdp.py Config dict formatting and alignment in FSDP tests.
tests/test_runner/test_runner.py Extensive formatting; minor message text retained.
tests/test_runner/test_log_processor.py Formatting and dict alignment in log processor tests.
tests/test_runner/test_checkpoint.py Formatting of checkpoint-related tests.
tests/test_registry/test_registry.py Formatting and registry tests adjustments.
tests/test_registry/test_build_functions.py Formatting updates to builder tests.
tests/test_optim/test_scheduler/test_param_scheduler.py Formatting; retains some mis-spelled identifiers.
tests/test_optim/test_scheduler/test_momentum_scheduler.py Formatting and alignment.
tests/test_optim/test_scheduler/test_lr_scheduler.py Formatting and alignment.
tests/test_optim/test_optimizer/test_optimizer_wrapper_dict.py Minor dict formatting.
tests/test_optim/test_optimizer/test_optimizer_wrapper.py Formatting and Amp/Apex wrapper test tweaks.
tests/test_optim/test_optimizer/test_optimizer.py Formatting across optimizer tests.
tests/test_model/test_wrappers/test_model_wrapper.py Skip condition and formatting changes.
tests/test_model/test_test_aug_time.py Dataloader instantiation formatting; cfg edits.
tests/test_model/test_model_utils.py Skipif clause formatting.
tests/test_model/test_efficient_conv_bn_eval.py Skipif clause formatting.
tests/test_model/test_base_module.py Model config dict formatting; minor logger init change.
tests/test_model/test_base_model/test_data_preprocessor.py Preprocessor init and test data dict formatting.
tests/test_model/test_base_model/test_base_model.py Minor variable dict formatting.
tests/test_model/test_averaged_model.py Minor sequential model formatting.
tests/test_logging/test_message_hub.py Formatting and OrderedDict usage updates.
tests/test_logging/test_logger.py Logger initialization and handler config formatting.
tests/test_infer/test_infer.py Formatting and list chunking test updates.
tests/test_hub/test_hub.py Skipif formatting and pretrained config test tweaks.
tests/test_hooks/test_sync_buffers_hook.py Process group init formatting.
tests/test_hooks/test_runtime_info_hook.py Formatting and dict updates.
tests/test_hooks/test_profiler_hook.py Profiler hook config formatting; minor assertions.
tests/test_hooks/test_prepare_tta_hook.py Hook config formatting.
tests/test_hooks/test_naive_visualization_hook.py Metainfo dict formatting.
tests/test_hooks/test_logger_hook.py LoggerHook init and assertions formatting.
tests/test_hooks/test_empty_cache_hook.py Skipif condition formatting.
tests/test_hooks/test_ema_hook.py Formatting; checkpoint load assertions.
tests/test_hooks/test_early_stopping_hook.py Formatting; EarlyStoppingHook config updates.
tests/test_hooks/test_checkpoint_hook.py Extensive formatting; messages and path checks.
tests/test_fileio/test_io.py File backend singleton tests formatting.
tests/test_fileio/test_fileio.py HTTP/Petrel backend mocks and list/dict from file tests formatting.
tests/test_fileio/test_fileclient.py Disk/Petrel backend list_dir_or_file tests formatting; comments.
tests/test_fileio/test_backends/test_petrel_backend.py Petrel backend utilities and IO tests formatting.
tests/test_fileio/test_backends/test_backend_utils.py Backend registration tests formatting.
tests/test_evaluator/test_metric.py DumpResults init validation formatting.
tests/test_evaluator/test_evaluator.py Evaluator process/evaluate formatting.
tests/test_dataset/test_sampler.py Assertion formatting.
tests/data/config/lazy_module_config/test_ast_transform.py Replace numpy.compat import with numpy.fft.
mmengine/structures/instance_data.py Add a blank line to separate import groups.
Comments suppressed due to low confidence (6)


