
Commit v1.1.2
mtang committed Feb 8, 2024
1 parent 1a5d688 commit 65de923
Showing 9 changed files with 521 additions and 335 deletions.
428 changes: 95 additions & 333 deletions README.md

Large diffs are not rendered by default.

5 changes: 5 additions & 0 deletions changelog.md
@@ -2,6 +2,11 @@

# pydantic2-resolve

## v.1.1.2 (2024.02.08)

- add `global_loader_filter` for convenience. (thanks Dambre)
- add `model_config` for better schema definition (same as `output`) and for hiding fields.

## v.1.1.1 (2023.12.20)

- fix corner case of empty list input. `tests/core/test_input.py`
243 changes: 243 additions & 0 deletions examples/readme_demo/readme.md
@@ -0,0 +1,243 @@
## Introduction

Assume we have 3 tables: `departments`, `teams`, and `members`, with a `1:N` relationship from left to right.

```mermaid
erDiagram
    Department ||--o{ Team : has
    Team ||--o{ Member : has
    Department {
        int id
        string name
    }
    Team {
        int id
        int department_id
        string name
    }
    Member {
        int id
        int team_id
        string name
    }
```

```python
departments = [
    dict(id=1, name='INFRA'),
    dict(id=2, name='DevOps'),
    dict(id=3, name='Sales'),
]

teams = [
    dict(id=1, department_id=1, name="K8S"),
    dict(id=2, department_id=1, name="MONITORING"),
    # ...
    dict(id=10, department_id=3, name="Operation"),
]

members = [
    dict(id=1, team_id=1, name="Sophia"),
    # ...
    dict(id=19, team_id=10, name="Emily"),
    dict(id=20, team_id=10, name="Ella")
]
```

We want to generate nested JSON based on these 3 tables. The output should look like:

```json
{
    "departments": [
        {
            "id": 1,
            "name": "INFRA",
            "teams": [
                {
                    "id": 1,
                    "name": "K8S",
                    "members": [
                        {
                            "id": 1,
                            "name": "Sophia"
                        }
                    ]
                }
            ]
        }
    ]
}
```

We will show how to build it with `pydantic2-resolve` in 3 steps:

1. define dataloaders
2. define pydantic schemas that use the dataloaders (no N+1 queries)
3. resolve

```python
import json
import asyncio
from collections import defaultdict
from typing import List
from pydantic import BaseModel
from pydantic2_resolve import Resolver, LoaderDepend, build_list

# 0. prepare table records
departments = [
    dict(id=1, name='INFRA'),
    dict(id=2, name='DevOps'),
    dict(id=3, name='Sales'),
]

teams = [
    dict(id=1, department_id=1, name="K8S"),
    dict(id=2, department_id=1, name="MONITORING"),
    dict(id=3, department_id=1, name="Jenkins"),
    dict(id=5, department_id=2, name="Frontend"),
    dict(id=6, department_id=2, name="Bff"),
    dict(id=7, department_id=2, name="Backend"),
    dict(id=8, department_id=3, name="CAT"),
    dict(id=9, department_id=3, name="Account"),
    dict(id=10, department_id=3, name="Operation"),
]

members = [
    dict(id=1, team_id=1, name="Sophia"),
    dict(id=2, team_id=1, name="Jackson"),
    dict(id=3, team_id=2, name="Olivia"),
    dict(id=4, team_id=2, name="Liam"),
    dict(id=5, team_id=3, name="Emma"),
    dict(id=6, team_id=4, name="Noah"),
    dict(id=7, team_id=5, name="Ava"),
    dict(id=8, team_id=6, name="Lucas"),
    dict(id=9, team_id=6, name="Isabella"),
    dict(id=10, team_id=6, name="Mason"),
    dict(id=11, team_id=7, name="Mia"),
    dict(id=12, team_id=8, name="Ethan"),
    dict(id=13, team_id=8, name="Amelia"),
    dict(id=14, team_id=9, name="Oliver"),
    dict(id=15, team_id=9, name="Charlotte"),
    dict(id=16, team_id=10, name="Jacob"),
    dict(id=17, team_id=10, name="Abigail"),
    dict(id=18, team_id=10, name="Daniel"),
    dict(id=19, team_id=10, name="Emily"),
    dict(id=20, team_id=10, name="Ella")
]

# 1. define dataloaders
async def teams_batch_load_fn(department_ids):
    """ return teams grouped by department_id """
    # visit [aiodataloader](https://github.com/syrusakbary/aiodataloader) to learn how to define a `DataLoader`

    dct = defaultdict(list)
    _teams = team_service.batch_query_by_department_ids(department_ids)  # assume the data is exposed by a service
    for team in _teams:
        dct[team['department_id']].append(team)

    return [dct.get(did, []) for did in department_ids]

async def members_batch_load_fn(team_ids):
    """ return members grouped by team_id """
    _members = member_service.batch_query_by_team_ids(team_ids)

    return build_list(_members, team_ids, lambda t: t['team_id'])  # helper that groups _members by team_id, aligned with team_ids

# 2. define pydantic schemas
class Member(BaseModel):
    id: int
    name: str

class Team(BaseModel):
    id: int
    name: str

    members: List[Member] = []
    def resolve_members(self, loader=LoaderDepend(members_batch_load_fn)):
        return loader.load(self.id)

    member_count: int = 0
    def post_member_count(self):
        return len(self.members)

class Department(BaseModel):
    id: int
    name: str
    teams: List[Team] = []
    def resolve_teams(self, loader=LoaderDepend(teams_batch_load_fn)):
        return loader.load(self.id)

    member_count: int = 0
    def post_member_count(self):
        return sum([team.member_count for team in self.teams])

class Result(BaseModel):
    departments: List[Department] = []
    def resolve_departments(self):
        return departments

# 3. resolve
async def main():
    result = Result()
    data = await Resolver().resolve(result)
    print(json.dumps(data.model_dump(), indent=4))

asyncio.run(main())
```
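
The dataloaders above call `team_service` and `member_service`, which are assumed to exist elsewhere. A minimal in-memory sketch of these two hypothetical services, filtering the lists from step 0 so the demo runs end to end, could look like this (they would need to be defined before `asyncio.run(main())` executes):

```python
# hypothetical in-memory stand-ins for the services used by the dataloaders above;
# they simply filter the `teams` / `members` lists defined in step 0
class TeamService:
    def batch_query_by_department_ids(self, department_ids):
        return [t for t in teams if t['department_id'] in department_ids]

class MemberService:
    def batch_query_by_team_ids(self, team_ids):
        return [m for m in members if m['team_id'] in team_ids]

team_service = TeamService()
member_service = MemberService()
```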

Then we get the following output (only the first department is shown for demonstration):

```json
{
    "departments": [
        {
            "id": 1,
            "name": "INFRA",
            "member_count": 5,
            "teams": [
                {
                    "id": 1,
                    "name": "K8S",
                    "member_count": 2,
                    "members": [
                        {
                            "id": 1,
                            "name": "Sophia"
                        },
                        {
                            "id": 2,
                            "name": "Jackson"
                        }
                    ]
                },
                {
                    "id": 2,
                    "name": "MONITORING",
                    "member_count": 2,
                    "members": [
                        {
                            "id": 3,
                            "name": "Olivia"
                        },
                        {
                            "id": 4,
                            "name": "Liam"
                        }
                    ]
                },
                {
                    "id": 3,
                    "name": "Jenkins",
                    "member_count": 1,
                    "members": [
                        {
                            "id": 5,
                            "name": "Emma"
                        }
                    ]
                }
            ]
        }
    ]
}
```
3 changes: 3 additions & 0 deletions pydantic2_resolve/exceptions.py
@@ -8,4 +8,7 @@ class LoaderFieldNotProvidedError(Exception):
    pass

class MissingAnnotationError(Exception):
    pass

class GlobalLoaderFieldOverlappedError(Exception):
    pass
9 changes: 8 additions & 1 deletion pydantic2_resolve/resolver.py
@@ -35,6 +35,7 @@ class Resolver:
    def __init__(
        self,
        loader_filters: Optional[Dict[Any, Dict[str, Any]]] = None,
        global_loader_filter: Optional[Dict[str, Any]] = None,
        loader_instances: Optional[Dict[Any, Any]] = None,
        ensure_type=False,
        context: Optional[Dict[str, Any]] = None
@@ -47,6 +48,10 @@ def __init__(
        # for dataloaders which have class attributes, you can assign the values here
        self.loader_filters = loader_filters or {}

        # keys in global_loader_filter are mutually exclusive with the key-value pairs in loader_filters
        # e.g. Resolver(global_loader_filter={'key_a': 1}, loader_filters={'key_a': 1}) will raise an exception
        self.global_loader_filter = global_loader_filter or {}

        # you can also pass your own loader instances; Resolver will check them with `isinstance`
        if loader_instances and self._validate_loader_instance(loader_instances):
            self.loader_instances = loader_instances
@@ -153,7 +158,9 @@ def _execute_resolver_method(self, method):
                # if extra transform provides
                loader = Loader()

                filter_config = self.loader_filters.get(Loader, {})
                filter_config = util.merge_dicts(
                    self.global_loader_filter,
                    self.loader_filters.get(Loader, {}))

                for field in util.get_class_field_annotations(Loader):
                    # >>> 2.1.1
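
To illustrate the new `global_loader_filter` parameter, here is a hypothetical sketch (the loader names and the `status` field are made up, not taken from the repository): a class-attribute filter shared by several dataloaders can now be assigned once instead of being repeated per loader in `loader_filters`, and an overlapping key between the two arguments raises `GlobalLoaderFieldOverlappedError`.

```python
from aiodataloader import DataLoader
from pydantic2_resolve import Resolver

# hypothetical loaders sharing a `status` class attribute used for filtering
class TeamLoader(DataLoader):
    status: str = ''
    async def batch_load_fn(self, department_ids):
        ...  # query teams by department_ids, filtered by self.status

class MemberLoader(DataLoader):
    status: str = ''
    async def batch_load_fn(self, team_ids):
        ...  # query members by team_ids, filtered by self.status

# before v1.1.2: the same value had to be repeated for every loader
resolver = Resolver(loader_filters={
    TeamLoader: {'status': 'active'},
    MemberLoader: {'status': 'active'},
})

# v1.1.2: assign it once for every loader that declares `status`
resolver = Resolver(global_loader_filter={'status': 'active'})

# providing the same key in both places raises GlobalLoaderFieldOverlappedError:
# Resolver(global_loader_filter={'status': 'active'},
#          loader_filters={TeamLoader: {'status': 'active'}})
```
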
49 changes: 49 additions & 0 deletions pydantic2_resolve/util.py
@@ -7,6 +7,7 @@
from inspect import iscoroutine, isfunction
from typing import Any, DefaultDict, Sequence, Type, TypeVar, List, Callable, Optional, Mapping, Union, Iterator, Dict, get_type_hints
import pydantic2_resolve.constant as const
from pydantic2_resolve.exceptions import GlobalLoaderFieldOverlappedError
from aiodataloader import DataLoader


@@ -18,6 +19,13 @@ def get_class_field_annotations(cls: Type):
T = TypeVar("T")
V = TypeVar("V")

def merge_dicts(a: Dict[str, Any], b: Dict[str, Any]):
    overlap = set(a.keys()) & set(b.keys())
    if overlap:
        raise GlobalLoaderFieldOverlappedError(f'loader_filters and global_loader_filter have duplicated key(s): {",".join(overlap)}')
    else:
        return {**a, **b}

def build_object(items: Sequence[T], keys: List[V], get_pk: Callable[[T], V]) -> Iterator[Optional[T]]:
"""
helper function to build return object data required by aiodataloader
@@ -48,6 +56,9 @@ def replace_method(cls: Type, cls_name: str, func_name: str, func: Callable):


def get_required_fields(kls: BaseModel):
"""
return required fields and fields that has resolve/post methods
"""
required_fields = []

# 1. get required fields
@@ -88,6 +99,44 @@ def schema_extra(schema: Dict[str, Any], model) -> None:
        raise AttributeError(f'target class {kls.__name__} is not BaseModel')
    return kls

def model_config(default_required: bool = True):
    """
    In pydantic v2 we can no longer use `__exclude_fields__` to set hidden fields via a hidden_fields param;
    model_config is now just a simple decorator that removes fields (marked with exclude=True) from schema.properties
    and sets schema.required for a better schema description.
    (Same as the `output` decorator; you can replace `output` with `model_config`.)
    It keeps the form model_config(params) so that new features can be added in the future.
    """
    def wrapper(kls):
        if issubclass(kls, BaseModel):
            def build():
                def _schema_extra(schema: Dict[str, Any], model) -> None:
                    # 1. collect excluded fields, then hide them in both schema and dump (the default action)
                    excluded_fields = [k for k, v in kls.model_fields.items() if v.exclude == True]
                    props = {}

                    # config schema properties
                    for k, v in schema.get('properties', {}).items():
                        if k not in excluded_fields:
                            props[k] = v
                    schema['properties'] = props

                    # config schema required (fields with default values are not listed in required by default,
                    # so the generated typescript models would mark them as optional, which is troublesome in use)
                    if default_required:
                        fnames = get_required_fields(model)
                        if excluded_fields:
                            fnames = [n for n in fnames if n not in excluded_fields]
                        schema['required'] = fnames

                return _schema_extra

            kls.model_config['json_schema_extra'] = staticmethod(build())
        else:
            raise AttributeError(f'target class {kls.__name__} is not BaseModel')
        return kls
    return wrapper

def mapper(func_or_class: Union[Callable, Type]):
"""
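
As a rough illustration of the new `model_config` decorator (the `TeamOutput` model and its fields are hypothetical, and the expected output is inferred from the code above): a field marked with `exclude=True` is hidden from both `model_dump()` and the generated JSON schema, while `default_required=True` keeps resolved/post fields in `schema['required']` even though they carry default values.

```python
from pydantic import BaseModel, Field
from pydantic2_resolve.util import model_config

@model_config()  # default_required=True
class TeamOutput(BaseModel):
    id: int
    name: str

    # computed by a post method at resolve time; it has a default value,
    # but default_required=True keeps it listed in schema['required'],
    # so generated clients (e.g. TypeScript) do not mark it as optional
    member_count: int = 0
    def post_member_count(self):
        return self.member_count  # placeholder body for this sketch

    # internal field: hidden from both model_dump() and the JSON schema
    internal_note: str = Field(default='', exclude=True)

schema = TeamOutput.model_json_schema()
print(list(schema['properties']))  # expected: ['id', 'name', 'member_count']
print(schema['required'])          # expected: ['id', 'name', 'member_count']
```
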
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "pydantic2-resolve"
version = "1.1.1"
version = "1.1.2"
description = "create nested data structure easily"
authors = ["tangkikodo <[email protected]>"]
readme = "README.md"
