Apply ruff==0.9.0 formatting #2313

Merged: 2 commits, Jan 10, 2025
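For reference, here is a minimal sketch (not part of the PR itself) of how the formatting pass could be reproduced locally. It assumes `ruff==0.9.0` is installed and on PATH (e.g. `pip install ruff==0.9.0`); the repository root path is a placeholder.

```python
# Hypothetical helper: run the ruff 0.9.0 formatter over a checkout of the repo.
# Assumes ruff==0.9.0 is installed; `repo_root` is an illustrative placeholder.
import subprocess


def format_repo(repo_root: str = ".") -> None:
    """Apply `ruff format` to every Python file under repo_root."""
    subprocess.run(["ruff", "format", repo_root], check=True)


if __name__ == "__main__":
    format_repo()
```

The diff below shows the resulting changes across the repository.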
4 changes: 2 additions & 2 deletions README.md
@@ -207,7 +207,7 @@ YOLOv3 has been designed to be super easy to get started and simple to learn. We
<summary>Figure Notes</summary>

- **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
- **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
- **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) V100 instance at batch-size 32.
- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`

@@ -234,7 +234,7 @@ YOLOv3 has been designed to be super easy to get started and simple to learn. We

- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** [Test Time Augmentation](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`

</details>
4 changes: 2 additions & 2 deletions README.zh-CN.md
@@ -207,7 +207,7 @@ YOLOv3 is super easy to get started with and simple to learn. We prioritize real-world results
<summary>Figure Notes</summary>

- **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over inference sizes from 256 to 1536.
- **GPU inference speed** is the average inference time per image on the [COCO val2017](http://cocodataset.org) dataset, using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch size 32.
- **GPU inference speed** is the average inference time per image on the [COCO val2017](http://cocodataset.org) dataset, using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) V100 instance at batch size 32.
- **EfficientDet** data is from [google/automl](https://github.com/google/automl) at batch size 32.
- **Reproduce** with `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`

@@ -234,7 +234,7 @@ YOLOv3 is super easy to get started with and simple to learn. We prioritize real-world results

- All checkpoints are trained for 300 epochs with default settings. The n and s models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml); all other models use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- \*\*mAP<sup>val</sup>\*\* values are computed single-model, single-scale on the [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce with `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Inference speed** is averaged over COCO val images on an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS time (about 1 ms/img) is not included.<br>Reproduce with `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **Inference speed** is averaged over COCO val images on an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) instance. NMS time (about 1 ms/img) is not included.<br>Reproduce with `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** [Test Time Augmentation](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) includes reflection and scale augmentations.<br>Reproduce with `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`

</details>
2 changes: 1 addition & 1 deletion classify/predict.py
@@ -192,7 +192,7 @@ def run(
vid_writer[i].write(im0)

# Print time (inference-only)
LOGGER.info(f"{s}{dt[1].dt * 1E3:.1f}ms")
LOGGER.info(f"{s}{dt[1].dt * 1e3:.1f}ms")

# Print results
t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image
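The only change here rewrites `1E3` as `1e3` inside an f-string: with ruff 0.9.0 the formatter also formats expressions embedded in f-strings, which normalizes numeric literals to a lowercase exponent. A small illustrative snippet (values are placeholders, not from the repository):

```python
# Illustrative only: 1E3 and 1e3 denote the same float, so the logged output is
# unchanged; only the source spelling differs.
elapsed_s = 0.0123  # hypothetical per-image inference time in seconds

before = f"{elapsed_s * 1E3:.1f}ms"  # pre-format spelling
after = f"{elapsed_s * 1e3:.1f}ms"   # spelling emitted by `ruff format` 0.9.0

assert before == after  # both render as "12.3ms"
```

The same normalization appears in `detect.py`, `segment/predict.py`, and the `1E9` memory calculations in the training scripts below.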
16 changes: 8 additions & 8 deletions classify/train.py
@@ -203,10 +203,10 @@ def lf(x):
scaler = amp.GradScaler(enabled=cuda)
val = test_dir.stem # 'val' or 'test'
LOGGER.info(
f'Image sizes {imgsz} train, {imgsz} test\n'
f'Using {nw * WORLD_SIZE} dataloader workers\n'
f"Image sizes {imgsz} train, {imgsz} test\n"
f"Using {nw * WORLD_SIZE} dataloader workers\n"
f"Logging results to {colorstr('bold', save_dir)}\n"
f'Starting {opt.model} training on {data} dataset with {nc} classes for {epochs} epochs...\n\n'
f"Starting {opt.model} training on {data} dataset with {nc} classes for {epochs} epochs...\n\n"
f"{'Epoch':>10}{'GPU_mem':>10}{'train_loss':>12}{f'{val}_loss':>12}{'top1_acc':>12}{'top5_acc':>12}"
)
for epoch in range(epochs): # loop over the dataset multiple times
@@ -292,13 +292,13 @@ def lf(x):
# Train complete
if RANK in {-1, 0} and final_epoch:
LOGGER.info(
f'\nTraining complete ({(time.time() - t0) / 3600:.3f} hours)'
f"\nTraining complete ({(time.time() - t0) / 3600:.3f} hours)"
f"\nResults saved to {colorstr('bold', save_dir)}"
f'\nPredict: python classify/predict.py --weights {best} --source im.jpg'
f'\nValidate: python classify/val.py --weights {best} --data {data_dir}'
f'\nExport: python export.py --weights {best} --include onnx'
f"\nPredict: python classify/predict.py --weights {best} --source im.jpg"
f"\nValidate: python classify/val.py --weights {best} --data {data_dir}"
f"\nExport: python export.py --weights {best} --include onnx"
f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{best}')"
f'\nVisualize: https://netron.app\n'
f"\nVisualize: https://netron.app\n"
)

# Plot examples
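These hunks switch single-quoted f-string fragments to double quotes so every fragment of the implicitly concatenated log messages uses one quote style. A hedged sketch of the pattern, with placeholder values rather than the repository's:

```python
# Illustrative only: the same multi-line log message with mixed quotes
# (pre-format) and with uniform double quotes (ruff 0.9.0 output).
save_dir = "runs/train-cls/exp"  # placeholder
epochs = 10                      # placeholder

before = (
    f'Image sizes 224 train, 224 test\n'
    f"Logging results to {save_dir}\n"
    f'Starting training for {epochs} epochs...'
)
after = (
    f"Image sizes 224 train, 224 test\n"
    f"Logging results to {save_dir}\n"
    f"Starting training for {epochs} epochs..."
)
assert before == after  # only the quoting changes, not the runtime string
```

The analogous quote changes in `export.py`, `segment/train.py`, `train.py`, and the logger utilities below follow the same rule.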
2 changes: 1 addition & 1 deletion detect.py
@@ -273,7 +273,7 @@ def run(
vid_writer[i].write(im0)

# Print time (inference-only)
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1e3:.1f}ms")

# Print results
t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image
4 changes: 2 additions & 2 deletions export.py
@@ -1498,12 +1498,12 @@ def run(
else ""
)
LOGGER.info(
f'\nExport complete ({time.time() - t:.1f}s)'
f"\nExport complete ({time.time() - t:.1f}s)"
f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
f"\nDetect: python {dir / ('detect.py' if det else 'predict.py')} --weights {f[-1]} {h}"
f"\nValidate: python {dir / 'val.py'} --weights {f[-1]} {h}"
f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}') {s}"
f'\nVisualize: https://netron.app'
f"\nVisualize: https://netron.app"
)
return f # return list of exported files/dirs

2 changes: 1 addition & 1 deletion segment/predict.py
@@ -245,7 +245,7 @@ def run(
vid_writer[i].write(im0)

# Print time (inference-only)
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1e3:.1f}ms")

# Print results
t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image
12 changes: 6 additions & 6 deletions segment/train.py
@@ -323,10 +323,10 @@ def lf(x):
compute_loss = ComputeLoss(model, overlap=overlap) # init loss class
# callbacks.run('on_train_start')
LOGGER.info(
f'Image sizes {imgsz} train, {imgsz} val\n'
f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
f"Image sizes {imgsz} train, {imgsz} val\n"
f"Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n"
f"Logging results to {colorstr('bold', save_dir)}\n"
f'Starting training for {epochs} epochs...'
f"Starting training for {epochs} epochs..."
)
for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
# callbacks.run('on_train_epoch_start')
@@ -403,7 +403,7 @@ def lf(x):
# Log
if RANK in {-1, 0}:
mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
mem = f"{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
mem = f"{torch.cuda.memory_reserved() / 1e9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
pbar.set_description(
("%11s" * 2 + "%11.4g" * 6)
% (f"{epoch}/{epochs - 1}", mem, *mloss, targets.shape[0], imgs.shape[-1])
@@ -736,9 +736,9 @@ def main(opt, callbacks=Callbacks()):
# Plot results
plot_evolve(evolve_csv)
LOGGER.info(
f'Hyperparameter evolution finished {opt.evolve} generations\n'
f"Hyperparameter evolution finished {opt.evolve} generations\n"
f"Results saved to {colorstr('bold', save_dir)}\n"
f'Usage example: $ python train.py --hyp {evolve_yaml}'
f"Usage example: $ python train.py --hyp {evolve_yaml}"
)


12 changes: 6 additions & 6 deletions train.py
@@ -359,10 +359,10 @@ def lf(x):
compute_loss = ComputeLoss(model) # init loss class
callbacks.run("on_train_start")
LOGGER.info(
f'Image sizes {imgsz} train, {imgsz} val\n'
f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
f"Image sizes {imgsz} train, {imgsz} val\n"
f"Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n"
f"Logging results to {colorstr('bold', save_dir)}\n"
f'Starting training for {epochs} epochs...'
f"Starting training for {epochs} epochs..."
)
for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
callbacks.run("on_train_epoch_start")
@@ -436,7 +436,7 @@ def lf(x):
# Log
if RANK in {-1, 0}:
mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
mem = f"{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
mem = f"{torch.cuda.memory_reserved() / 1e9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
pbar.set_description(
("%11s" * 2 + "%11.4g" * 5)
% (f"{epoch}/{epochs - 1}", mem, *mloss, targets.shape[0], imgs.shape[-1])
@@ -805,9 +805,9 @@ def main(opt, callbacks=Callbacks()):
# Plot results
plot_evolve(evolve_csv)
LOGGER.info(
f'Hyperparameter evolution finished {opt.evolve} generations\n'
f"Hyperparameter evolution finished {opt.evolve} generations\n"
f"Results saved to {colorstr('bold', save_dir)}\n"
f'Usage example: $ python train.py --hyp {evolve_yaml}'
f"Usage example: $ python train.py --hyp {evolve_yaml}"
)


10 changes: 1 addition & 9 deletions utils/augmentations.py
@@ -197,15 +197,7 @@ def random_perspective(
else: # affine
im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))

# Visualize
# import matplotlib.pyplot as plt
# ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
# ax[0].imshow(im[:, :, ::-1]) # base
# ax[1].imshow(im2[:, :, ::-1]) # warped

# Transform label coordinates
n = len(targets)
if n:
if n := len(targets):
use_segments = any(x.any() for x in segments) and len(segments) == n
new = np.zeros((n, 4))
if use_segments: # warp segments
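Besides dropping a commented-out matplotlib visualization block, this hunk folds `n = len(targets)` followed by `if n:` into a single assignment expression. A brief sketch of the walrus-operator pattern (PEP 572, Python 3.8+), using placeholder data:

```python
# Illustrative only: bind and test the label count in one step.
targets = [(0, 0.5, 0.5, 0.2, 0.3)] * 3  # placeholder labels: (cls, x, y, w, h)

# Before: separate assignment and truthiness test.
n = len(targets)
if n:
    print(f"transforming {n} labels")

# After: an assignment expression keeps the same behavior in one line.
if n := len(targets):
    print(f"transforming {n} labels")
```

The same refactor recurs in `utils/dataloaders.py`, `utils/general.py`, `utils/loss.py`, and `utils/segment/augmentations.py`.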
13 changes: 5 additions & 8 deletions utils/dataloaders.py
@@ -317,8 +317,7 @@ def __init__(self, path, img_size=640, stride=32, auto=True, transforms=None, vi
else:
self.cap = None
assert self.nf > 0, (
f"No images or videos found in {p}. "
f"Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}"
f"No images or videos found in {p}. Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}"
)

def __iter__(self):
@@ -667,8 +666,8 @@ def check_cache_ram(self, safety_margin=0.1, prefix=""):
cache = mem_required * (1 + safety_margin) < mem.available # to cache or not to cache, that is the question
if not cache:
LOGGER.info(
f'{prefix}{mem_required / gb:.1f}GB RAM required, '
f'{mem.available / gb:.1f}/{mem.total / gb:.1f}GB available, '
f"{prefix}{mem_required / gb:.1f}GB RAM required, "
f"{mem.available / gb:.1f}/{mem.total / gb:.1f}GB available, "
f"{'caching images ✅' if cache else 'not caching images ⚠️'}"
)
return cache
@@ -730,8 +729,8 @@ def __getitem__(self, index):
index = self.indices[index] # linear, shuffled, or image_weights

hyp = self.hyp
mosaic = self.mosaic and random.random() < hyp["mosaic"]
if mosaic:
if mosaic := self.mosaic and random.random() < hyp["mosaic"]:
# Load mosaic
img, labels = self.load_mosaic(index)
shapes = None
@@ -1109,8 +1107,7 @@ def verify_image_label(args):
segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in lb] # (cls, xy1...)
lb = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh)
lb = np.array(lb, dtype=np.float32)
nl = len(lb)
if nl:
if nl := len(lb):
assert lb.shape[1] == 5, f"labels require 5 columns, {lb.shape[1]} columns detected"
assert (lb >= 0).all(), f"negative label values {lb[lb < 0]}"
assert (lb[:, 1:] <= 1).all(), f"non-normalized or out of bounds coordinates {lb[:, 1:][lb[:, 1:] > 1]}"
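Beyond the quote and walrus changes, the first hunk joins two implicitly concatenated f-string fragments into a single literal now that it fits on one line, matching ruff 0.9.0's handling of implicit string concatenation. An illustrative sketch with placeholder values:

```python
# Illustrative only: adjacent string literals are concatenated at compile time,
# so the joined and split spellings build the same message.
p = "data/images"                    # placeholder path
IMG_FORMATS = ("bmp", "jpg", "png")  # placeholder formats
VID_FORMATS = ("avi", "mp4")         # placeholder formats

split = (
    f"No images or videos found in {p}. "
    f"Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}"
)
joined = f"No images or videos found in {p}. Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}"
assert split == joined
```

Note also that in the `__getitem__` hunk, `mosaic := self.mosaic and random.random() < hyp["mosaic"]` binds the whole conjunction to `mosaic`, because `:=` has lower precedence than `and`, so the refactor preserves the behavior of the original two-line form.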
9 changes: 4 additions & 5 deletions utils/general.py
@@ -161,8 +161,7 @@ def user_config_dir(dir="Ultralytics", env_var="YOLOV5_CONFIG_DIR"):
"""Returns user configuration directory path, prefers `env_var` if set, else uses OS-specific path, creates
directory if needed.
"""
env = os.getenv(env_var)
if env:
if env := os.getenv(env_var):
path = Path(env) # use environment variable
else:
cfg = {"Windows": "AppData/Roaming", "Linux": ".config", "Darwin": "Library/Application Support"} # 3 OS dirs
@@ -493,9 +492,9 @@ def check_file(file, suffix=""):
assert Path(file).exists() and Path(file).stat().st_size > 0, f"File download failed: {url}" # check
return file
elif file.startswith("clearml://"): # ClearML Dataset ID
assert (
"clearml" in sys.modules
), "ClearML is not installed, so cannot use ClearML dataset. Try running 'pip install clearml'."
assert "clearml" in sys.modules, (
"ClearML is not installed, so cannot use ClearML dataset. Try running 'pip install clearml'."
)
return file
else: # search
files = []
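The `check_file` hunk shows ruff 0.9.0's revised layout for `assert` statements with long messages: the message is parenthesized and wrapped instead of the condition. A hedged sketch of the two shapes (condition and message are placeholders, and both asserts pass):

```python
# Illustrative only: the same assert in the old and new layouts.
import sys

# Old layout: the condition is parenthesized across lines.
assert (
    "sys" in sys.modules
), "placeholder message that is long enough to force the formatter to wrap this assert statement."

# ruff 0.9.0 layout: the condition stays on the assert line, the message wraps.
assert "sys" in sys.modules, (
    "placeholder message that is long enough to force the formatter to wrap this assert statement."
)
```

The ClearML utility below receives the same treatment.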
8 changes: 3 additions & 5 deletions utils/loggers/clearml/clearml_utils.py
@@ -39,11 +39,9 @@ def construct_dataset(clearml_info_string):
with open(yaml_filenames[0]) as f:
dataset_definition = yaml.safe_load(f)

assert set(
dataset_definition.keys()
).issuperset(
{"train", "test", "val", "nc", "names"}
), "The right keys were not found in the yaml file, make sure it at least has the following keys: ('train', 'test', 'val', 'nc', 'names')"
assert set(dataset_definition.keys()).issuperset({"train", "test", "val", "nc", "names"}), (
"The right keys were not found in the yaml file, make sure it at least has the following keys: ('train', 'test', 'val', 'nc', 'names')"
)

data_dict = {
"train": (
4 changes: 1 addition & 3 deletions utils/loggers/comet/__init__.py
@@ -86,14 +86,12 @@ def __init__(self, opt, hyp, run_id=None, job_type="Training", **experiment_kwar
self.upload_dataset = self.opt.upload_dataset or COMET_UPLOAD_DATASET
self.resume = self.opt.resume

# Default parameters to pass to Experiment objects
self.default_experiment_kwargs = {
"log_code": False,
"log_env_gpu": True,
"log_env_cpu": True,
"project_name": COMET_PROJECT_NAME,
}
self.default_experiment_kwargs.update(experiment_kwargs)
} | experiment_kwargs
self.experiment = self._get_experiment(self.comet_mode, run_id)
self.experiment.set_name(self.opt.name)

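Here the Comet logger's default experiment kwargs are merged with the caller's overrides via the dict union operator instead of a follow-up `.update()` call. A small sketch of the `|` merge (PEP 584, Python 3.9+) with placeholder keys:

```python
# Illustrative only: `|` returns a new dict; on duplicate keys the right-hand
# operand wins, mirroring what dict.update() would have produced in place.
defaults = {"log_code": False, "log_env_gpu": True, "project_name": "demo"}
overrides = {"log_code": True, "auto_metric_logging": False}

merged = defaults | overrides
assert merged["log_code"] is True        # caller override wins
assert merged["log_env_gpu"] is True     # untouched defaults are kept
assert "auto_metric_logging" in merged   # extra caller keys are included
```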
2 changes: 1 addition & 1 deletion utils/loggers/wandb/wandb_utils.py
@@ -18,7 +18,7 @@
RANK = int(os.getenv("RANK", -1))
DEPRECATION_WARNING = (
f"{colorstr('wandb')}: WARNING ⚠️ wandb is deprecated and will be removed in a future release. "
f'See supported integrations at https://github.com/ultralytics/yolov5#integrations.'
f"See supported integrations at https://github.com/ultralytics/yolov5#integrations."
)

try:
7 changes: 1 addition & 6 deletions utils/loss.py
@@ -150,8 +150,7 @@ def __call__(self, p, targets): # predictions, targets
b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
tobj = torch.zeros(pi.shape[:4], dtype=pi.dtype, device=self.device) # target obj

n = b.shape[0] # number of targets
if n:
if n := b.shape[0]:
# pxy, pwh, _, pcls = pi[b, a, gj, gi].tensor_split((2, 4, 5), dim=1) # faster, requires torch 1.8.0
pxy, pwh, _, pcls = pi[b, a, gj, gi].split((2, 2, 1, self.nc), 1) # target-subset of predictions

@@ -177,10 +176,6 @@ def __call__(self, p, targets): # predictions, targets
t[range(n), tcls[i]] = self.cp
lcls += self.BCEcls(pcls, t) # BCE

# Append targets to text file
# with open('targets.txt', 'a') as file:
# [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]

obji = self.BCEobj(pi[..., 4], tobj)
lobj += obji * self.balance[i] # obj loss
if self.autobalance:
10 changes: 1 addition & 9 deletions utils/segment/augmentations.py
@@ -67,16 +67,8 @@ def random_perspective(
else: # affine
im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))

# Visualize
# import matplotlib.pyplot as plt
# ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
# ax[0].imshow(im[:, :, ::-1]) # base
# ax[1].imshow(im2[:, :, ::-1]) # warped

# Transform label coordinates
n = len(targets)
new_segments = []
if n:
if n := len(targets):
new = np.zeros((n, 4))
segments = resample_segments(segments) # upsample
for i, segment in enumerate(segments):