
use result #191

Open · wants to merge 2 commits into main
16 changes: 15 additions & 1 deletion poetry.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions pyproject.toml
@@ -32,6 +32,7 @@ pyyaml = ">=5.4"
loguru = ">=0.7"
typing-extensions = ">=4.6"
greenlet = ">=3"
result = "^0.17.0"
💡 Codebase verification

Exception-based error handling still present.

The 'result' dependency has been integrated and used in several areas. However, multiple instances of exception-based error handling (raise, except, try:) remain in the codebase. Please ensure that all error handling aligns with the PR objective of using results instead of exceptions.

🔗 Analysis chain

LGTM! Verify integration of the new 'result' dependency.

The addition of the 'result' dependency (version ^0.17.0) aligns with the PR objective of using results instead of exceptions for error handling. This is a good practice for more structured error handling.
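For readers unfamiliar with the library, here is a minimal sketch of the pattern it provides (illustrative only; parse_port is a hypothetical example function, not part of this PR):

from result import Err, Ok, Result

def parse_port(raw: str) -> Result[int, str]:
    """Return Ok(port) on success and Err(message) on failure, instead of raising."""
    if not raw.isdigit():
        return Err(f"not a number: {raw!r}")
    port = int(raw)
    if not 0 < port < 65536:
        return Err(f"port out of range: {port}")
    return Ok(port)

parsed = parse_port("8080")
if parsed.is_err():
    print(parsed.unwrap_err())  # error message, no exception raised
else:
    print(parsed.unwrap())      # 8080

Callers must acknowledge the error branch before unwrapping, which is the behaviour the rest of this PR relies on.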

To ensure smooth integration, please run the following script to check for any potential conflicts or issues:

This script will help identify where and how the new 'result' dependency is used, and whether any exception-based error handling remains that might need to be updated.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the integration of the 'result' dependency

# Test: Check if 'result' is imported in the project files
echo "Checking for 'result' imports:"
rg --type python 'from result import' || echo "No imports found"

# Test: Look for usage of Result type or functions
echo "Checking for 'Result' usage:"
rg --type python '\bResult\b' || echo "No direct usage of 'Result' found"

# Test: Check for any remaining exception-based error handling
echo "Checking for exception-based error handling:"
rg --type python '(raise|except|try:)' || echo "No exception-based error handling found"

# Note: These checks provide an overview. Manual review of the results is recommended.

Length of output: 669


Script:

#!/bin/bash
# Description: Verify the integration of the 'result' dependency without specifying a file type
# (ripgrep's built-in type for Python is "py", so the previous script's "--type python" filter was not recognized)

# Test: Check if 'result' is imported in the project files
echo "Checking for 'result' imports:"
rg 'from result import' || echo "No imports found"

# Test: Look for usage of Result type or functions
echo "Checking for 'Result' usage:"
rg '\bResult\b' || echo "No direct usage of 'Result' found"

# Test: Check for any remaining exception-based error handling
echo "Checking for exception-based error handling:"
rg '(raise|except|try:)' || echo "No exception-based error handling found"

# Note: These checks provide an overview. Manual review of the results is recommended.

Length of output: 5369


[tool.poetry.group.dev.dependencies]
mypy = "^1.11"
17 changes: 13 additions & 4 deletions sqlalchemy_to_json_schema/command/driver.py
@@ -7,6 +7,7 @@
from typing import Any, Callable, Optional, Union, cast

import yaml
from result import Err, Ok, Result
from sqlalchemy.ext.declarative import DeclarativeMeta

from sqlalchemy_to_json_schema.command.transformer import (
@@ -57,7 +58,9 @@ def __init__(self, walker: Walker, decision: Decision, layout: Layout, /):

def build_transformer(
self, walker: Walker, decision: Decision, layout: Layout, /
) -> Callable[[Iterable[Union[ModuleType, DeclarativeMeta]], Optional[int]], Schema]:
) -> Callable[
[Iterable[Union[ModuleType, DeclarativeMeta]], Optional[int]], Result[Schema, str]
]:
walker_factory = WALKER_MAP[walker]
relation_decision = DECISION_MAP[decision]()
schema_factory = SchemaFactory(walker_factory, relation_decision=relation_decision)
@@ -73,7 +76,7 @@ def run(
filename: Optional[Path] = None,
format: Optional[Format] = None,
depth: Optional[int] = None,
) -> None:
) -> Result[None, str]:
🛠️ Refactor suggestion

LGTM: Improved error handling in run method

The run method has been updated to return a Result[None, str] and now includes proper error handling for the self.transformer call. This is a significant improvement in error management.

However, there's still room for improvement in handling potential errors from self.dump:

Consider updating the dump method to return a Result type and handle its potential errors in the run method. This would provide consistent error handling throughout the class. Here's a suggested implementation:

def dump(self, ...) -> Result[None, str]:
    try:
        # Existing implementation
        return Ok(None)
    except Exception as e:
        return Err(str(e))

def run(self, ...):
    # Existing implementation
    dump_result = self.dump(schema.unwrap(), filename=filename, format=format)
    if dump_result.is_err():
        return Err(dump_result.unwrap_err())
    return Ok(None)

This change would ensure that any errors occurring during the dump process are also captured and propagated correctly.

Also applies to: 90-97

modules_and_types = (load_module_or_symbol(target) for target in targets)
modules_and_models = cast(
Iterator[Union[ModuleType, DeclarativeMeta]],
@@ -84,8 +87,14 @@ def run(
),
)

result = self.transformer(modules_and_models, depth)
self.dump(result, filename=filename, format=format)
schema = self.transformer(modules_and_models, depth)

if schema.is_err():
return Err(schema.unwrap_err())

self.dump(schema.unwrap(), filename=filename, format=format)

return Ok(None)
Comment on lines +90 to +97
🛠️ Refactor suggestion

Consider handling potential exceptions from the dump method

While the run method now handles errors from self.transformer, consider that self.dump might raise exceptions (e.g., file I/O errors). To maintain consistent error handling, you could modify dump to return a Result type or handle exceptions within run.

Proposed Change to dump:

Modify the dump method to return a Result[None, str]:

 def dump(
     self,
     data: dict[str, Any],
     /,
     *,
     filename: Optional[Path] = None,
     format: Optional[Format] = None,
- ) -> None:
+ ) -> Result[None, str]:
     try:
         dump_function = yaml.dump if format == Format.YAML else json.dump

         if filename is None:
             output_stream = sys.stdout
         else:
             output_stream = filename.open("w")

         dump_function(data, output_stream)  # type: ignore[operator]
+        return Ok(None)
     except Exception as e:
+        return Err(str(e))

Update run to handle the Result from dump:

 schema = self.transformer(modules_and_models, depth)

 if schema.is_err():
     return Err(schema.unwrap_err())

- self.dump(schema.unwrap(), filename=filename, format=format)
+ dump_result = self.dump(schema.unwrap(), filename=filename, format=format)
+ if dump_result.is_err():
+     return Err(dump_result.unwrap_err())

 return Ok(None)

This ensures that any errors from dump are captured and propagated correctly.

Committable suggestion was skipped due to low confidence.


def dump(
self,
81 changes: 59 additions & 22 deletions sqlalchemy_to_json_schema/command/transformer.py
@@ -5,6 +5,7 @@
from typing import Optional, Union

from loguru import logger
from result import Err, Ok, Result
from sqlalchemy.ext.declarative import DeclarativeMeta
from typing_extensions import TypeGuard

@@ -18,13 +19,13 @@ def __init__(self, schema_factory: SchemaFactory, /):
@abstractmethod
def transform(
self, rawtargets: Iterable[Union[ModuleType, DeclarativeMeta]], depth: Optional[int], /
) -> Schema: ...
) -> Result[Schema, str]: ...


class JSONSchemaTransformer(AbstractTransformer):
def transform(
self, rawtargets: Iterable[Union[ModuleType, DeclarativeMeta]], depth: Optional[int], /
) -> Schema:
) -> Result[Schema, str]:
definitions = {}

for item in rawtargets:
@@ -33,33 +34,46 @@ def transform(
elif inspect.ismodule(item):
partial_definitions = self.transform_by_module(item, depth)
else:
TypeError(f"Expected a class or module, got {item}")
return Err(f"Expected a class or module, got {item}")

definitions.update(partial_definitions)
if partial_definitions.is_err():
return partial_definitions

return definitions
definitions.update(partial_definitions.unwrap())

def transform_by_model(self, model: DeclarativeMeta, depth: Optional[int], /) -> Schema:
return Ok(definitions)

def transform_by_model(
self, model: DeclarativeMeta, depth: Optional[int], /
) -> Result[Schema, str]:
return self.schema_factory(model, depth=depth)

def transform_by_module(self, module: ModuleType, depth: Optional[int], /) -> Schema:
def transform_by_module(
self, module: ModuleType, depth: Optional[int], /
) -> Result[Schema, str]:
subdefinitions = {}
definitions = {}
for basemodel in collect_models(module):
schema = self.schema_factory(basemodel, depth=depth)
schema_result = self.schema_factory(basemodel, depth=depth)

if schema_result.is_err():
return schema_result

schema = schema_result.unwrap()

Comment on lines +58 to +63
🛠️ Refactor suggestion

Refactor duplicated code into a shared method

The code handling schema_result in transform_by_module methods of JSONSchemaTransformer and OpenAPI2Transformer is duplicated:

schema_result = self.schema_factory(basemodel, depth=depth)

if schema_result.is_err():
    return schema_result

schema = schema_result.unwrap()

Consider refactoring this logic into a shared helper method or moving it to the AbstractTransformer class to reduce duplication and enhance maintainability.

Also applies to: 119-124
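One way such a helper might look, as a sketch only (the method name build_module_schemas is hypothetical, and the snippet assumes the module's existing imports: Err, Ok, Result, Schema, ModuleType, Optional, and collect_models):

def build_module_schemas(
    self, module: ModuleType, depth: Optional[int], /
) -> Result[list[Schema], str]:
    """Build one schema per model in the module, stopping at the first error."""
    schemas: list[Schema] = []
    for basemodel in collect_models(module):
        schema_result = self.schema_factory(basemodel, depth=depth)
        if schema_result.is_err():
            return Err(schema_result.unwrap_err())
        schemas.append(schema_result.unwrap())
    return Ok(schemas)

Each transform_by_module could then call this helper once and merge the returned schemas, keeping the error check in a single place.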

if "definitions" in schema:
subdefinitions.update(schema.pop("definitions"))
definitions[schema["title"]] = schema
d = {}
d.update(subdefinitions)
d.update(definitions)
return {"definitions": definitions}
return Ok({"definitions": definitions})
⚠️ Potential issue

Inconsistent wrapping of 'definitions' in the return statement

In transform_by_module, you're returning Ok({"definitions": definitions}), which wraps the definitions inside another definitions key. However, in other transformers, you return Ok(definitions) directly. This inconsistency could lead to confusion or errors when the output is used elsewhere. Consider standardizing the return structure across all transformers.

Apply this diff to align the return statement with other methods:

-    return Ok({"definitions": definitions})
+    return Ok(definitions)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:
-    return Ok({"definitions": definitions})
+    return Ok(definitions)



class OpenAPI2Transformer(AbstractTransformer):
def transform(
self, rawtargets: Iterable[Union[ModuleType, DeclarativeMeta]], depth: Optional[int], /
) -> Schema:
) -> Result[Schema, str]:
definitions = {}

for target in rawtargets:
@@ -68,29 +82,46 @@ def transform(
elif inspect.ismodule(target):
partial_definitions = self.transform_by_module(target, depth)
else:
raise TypeError(f"Expected a class or module, got {target}")
return Err(f"Expected a class or module, got {target}")

if partial_definitions.is_err():
return partial_definitions

definitions.update(partial_definitions)
definitions.update(partial_definitions.unwrap())

return {"definitions": definitions}
return Ok({"definitions": definitions})

def transform_by_model(self, model: DeclarativeMeta, depth: Optional[int], /) -> Schema:
def transform_by_model(
self, model: DeclarativeMeta, depth: Optional[int], /
) -> Result[Schema, str]:
definitions = {}
schema = self.schema_factory(model, depth=depth)
schema_result = self.schema_factory(model, depth=depth)

if schema_result.is_err():
return schema_result

schema = schema_result.unwrap()

if "definitions" in schema:
definitions.update(schema.pop("definitions"))

definitions[schema["title"]] = schema

return definitions
return Ok(definitions)

def transform_by_module(self, module: ModuleType, depth: Optional[int], /) -> Schema:
def transform_by_module(
self, module: ModuleType, depth: Optional[int], /
) -> Result[Schema, str]:
subdefinitions = {}
definitions = {}

for basemodel in collect_models(module):
schema = self.schema_factory(basemodel, depth=depth)
schema_result = self.schema_factory(basemodel, depth=depth)

if schema_result.is_err():
return schema_result

schema = schema_result.unwrap()

if "definitions" in schema:
subdefinitions.update(schema.pop("definitions"))
@@ -101,7 +132,7 @@ def transform_by_module(self, module: ModuleType, depth: Optional[int], /) -> Sc
d.update(subdefinitions)
d.update(definitions)

return definitions
return Ok(definitions)


class OpenAPI3Transformer(OpenAPI2Transformer):
@@ -118,8 +149,13 @@ def replace_ref(self, d: Union[dict, list], old_prefix: str, new_prefix: str, /)

def transform(
self, rawtargets: Iterable[Union[ModuleType, DeclarativeMeta]], depth: Optional[int], /
) -> Schema:
definitions = super().transform(rawtargets, depth)
) -> Result[Schema, str]:
definitions_result = super().transform(rawtargets, depth)

if definitions_result.is_err():
return Err(definitions_result.unwrap_err())
Comment on lines +155 to +156
🛠️ Refactor suggestion

Simplify error propagation in transform method

In OpenAPI3Transformer.transform, you unwrap the error only to wrap it again:

if definitions_result.is_err():
    return Err(definitions_result.unwrap_err())

You can simplify this by returning the definitions_result directly when an error occurs:

if definitions_result.is_err():
    return definitions_result

This approach maintains consistency with other methods and reduces unnecessary code.


definitions = definitions_result.unwrap()

self.replace_ref(definitions, "#/definitions/", "#/components/schemas/")

@@ -128,7 +164,8 @@ def transform(
if "schemas" not in definitions["components"]:
definitions["components"]["schemas"] = {}
definitions["components"]["schemas"] = definitions.pop("definitions", {})
return definitions

return Ok(definitions)


def collect_models(module: ModuleType, /) -> Iterator[DeclarativeMeta]:
49 changes: 30 additions & 19 deletions sqlalchemy_to_json_schema/decisions.py
@@ -2,6 +2,7 @@
from collections.abc import Iterator
from typing import Any, Union

from result import Err, Ok, Result
from sqlalchemy.orm import MapperProperty
from sqlalchemy.orm.base import MANYTOMANY, MANYTOONE
from sqlalchemy.orm.properties import ColumnProperty
@@ -24,7 +25,7 @@ def decision(
/,
*,
toplevel: bool = False,
) -> Iterator[DecisionResult]:
) -> Iterator[Result[DecisionResult, MapperProperty]]:
pass


@@ -36,13 +37,13 @@ def decision(
/,
*,
toplevel: bool = False,
) -> Iterator[DecisionResult]:
) -> Iterator[Result[DecisionResult, MapperProperty]]:
if hasattr(prop, "mapper"):
yield ColumnPropertyType.RELATIONSHIP, prop, {}
yield Ok((ColumnPropertyType.RELATIONSHIP, prop, {}))
elif hasattr(prop, "columns"):
yield ColumnPropertyType.FOREIGNKEY, prop, {}
yield Ok((ColumnPropertyType.FOREIGNKEY, prop, {}))
else:
raise NotImplementedError(prop)
yield Err(prop)


class UseForeignKeyIfPossibleDecision(AbstractDecision):
@@ -53,32 +54,42 @@ def decision(
/,
*,
toplevel: bool = False,
) -> Iterator[DecisionResult]:
) -> Iterator[Result[DecisionResult, MapperProperty]]:
if hasattr(prop, "mapper"):
if prop.direction == MANYTOONE:
if toplevel:
for c in prop.local_columns:
yield ColumnPropertyType.FOREIGNKEY, walker.mapper._props[c.name], {
"relation": prop.key
}
yield Ok(
(
ColumnPropertyType.FOREIGNKEY,
walker.mapper._props[c.name],
{"relation": prop.key},
)
)
else:
rp = walker.history[0]
if prop.local_columns != rp.remote_side:
for c in prop.local_columns:
yield ColumnPropertyType.FOREIGNKEY, walker.mapper._props[c.name], {
"relation": prop.key
}
yield Ok(
(
ColumnPropertyType.FOREIGNKEY,
walker.mapper._props[c.name],
{"relation": prop.key},
)
)
elif prop.direction == MANYTOMANY:
# logger.warning("skip mapper=%s, prop=%s is many to many.", walker.mapper, prop)
# fixme: this must return a ColumnPropertyType member
yield (
{"type": "array", "items": {"type": "string"}}, # type: ignore[misc]
prop,
{},
yield Ok(
( # type: ignore[arg-type]
{"type": "array", "items": {"type": "string"}},
prop,
{},
)
Comment on lines +83 to +88
⚠️ Potential issue

Fix type inconsistency in yield statement for MANYTOMANY relationship

In the UseForeignKeyIfPossibleDecision.decision method, within the elif prop.direction == MANYTOMANY block, the yield statement returns a tuple where the first element is a dictionary ({"type": "array", "items": {"type": "string"}}). However, the DecisionResult expects the first element to be of type ColumnPropertyType. This inconsistency necessitates the # type: ignore[arg-type] comment.

Consider defining a new member in ColumnPropertyType to represent this case (e.g., ColumnPropertyType.ARRAY) and adjust the yield statement accordingly. This will ensure type consistency and eliminate the need for the type ignore comment.

Apply this diff to correct the issue:

-                    yield Ok(
-                        (  # type: ignore[arg-type]
-                            {"type": "array", "items": {"type": "string"}},
-                            prop,
-                            {},
-                        )
-                    )
+                    yield Ok(
+                        (
+                            ColumnPropertyType.ARRAY,
+                            prop,
+                            {"type": "array", "items": {"type": "string"}},
+                        )
+                    )

Committable suggestion was skipped due to low confidence.
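If that route is taken, the ARRAY member would also have to be added to the enum itself. A sketch of what that might look like, assuming ColumnPropertyType is a plain Enum (its actual definition and member values are not visible in this diff, so the values below are placeholders):

from enum import Enum, unique

@unique
class ColumnPropertyType(Enum):
    RELATIONSHIP = "relationship"
    FOREIGNKEY = "foreignkey"
    ARRAY = "array"  # hypothetical new member for the MANYTOMANY case

Downstream consumers of DecisionResult would then need to handle the new ARRAY member wherever they branch on the property type.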

)
else:
yield ColumnPropertyType.RELATIONSHIP, prop, {}
yield Ok((ColumnPropertyType.RELATIONSHIP, prop, {}))
elif hasattr(prop, "columns"):
yield ColumnPropertyType.FOREIGNKEY, prop, {}
yield Ok((ColumnPropertyType.FOREIGNKEY, prop, {}))
else:
raise NotImplementedError(prop)
yield Err(prop)