
Actual serverless aws lambda support #162

Closed
naturalethic opened this issue Oct 4, 2018 · 14 comments
Labels
Question ❔ Not future request, proposal or bug issue Solved ✔️ The issue has been solved

Comments

naturalethic commented Oct 4, 2018

Getting this going on serverless has been a time sink; it would be nice if this worked out of the box. There are two main issues I've encountered, the first leading to the second.

Both stem from however the types are registered in graphql: the symptoms don't appear when using standard techniques, only when introducing type-graphql.

  1. As described in "Using this in a serverless environment (e.g. apollo-lambda-server)" (#96), you can only run buildSchema once. The solution is to store the schema in a global and check whether it already exists before building it.

  2. This in turn breaks serverless-offline when you change your schema: offline re-compiles and reloads the TypeScript, but because of the workaround in point 1 above, the schema is not rebuilt. You have to manually kill the offline process and restart it.
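The point-1 workaround can be sketched generically. This is a minimal illustration with made-up names; `buildExpensiveSchema` stands in for type-graphql's `buildSchema({ resolvers })`:

```typescript
// Cache the result of an expensive async build in module scope so that
// warm invocations reuse it instead of rebuilding on every request.
let cachedSchema: Promise<string> | undefined;

async function buildExpensiveSchema(): Promise<string> {
  // In a real handler this would be: await buildSchema({ resolvers })
  return "compiled-schema";
}

function getSchema(): Promise<string> {
  if (!cachedSchema) {
    cachedSchema = buildExpensiveSchema();
  }
  return cachedSchema;
}
```

Because the promise itself is cached (not the resolved value), concurrent requests during a cold start also share a single build instead of racing.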

Now, I don't know at this point how the internals work, but my instinct is that there should be a means of re-compiling the schema without killing the process, by clearing out any types wherever they are cached.

It seems wrong that two different schemas should be sharing definitions anyway. Why are types globally cached? Why does this symptom not occur when using standard graphql setups?

Possible workarounds not requiring changes to this project:

  • Maybe a serverless plugin that forces clearing of graphql-related modules and reloads them.
  • A wrapper around serverless-offline that actually kills the entire process and restarts it.

These solutions are not ideal.

MichalLytek (Owner) commented Oct 5, 2018

> It seems wrong that two different schemas should be sharing definitions anyway. Why are types globally cached? Why does this symptom not occur when using standard graphql setups?

Because standard solutions generate the schema programmatically, using graphql-js, or from an SDL file with graphql-tools. TypeGraphQL instead uses decorators to collect the metadata and then builds the schema from it.

Unfortunately, the duplicated-types issue comes from the way decorators are evaluated and how Node's require handles imports. It could be avoided only by forcing you to register every single object type, arg type, input type, interface type, enum type and union type in buildSchema, not only the resolvers as is done now.

And building the schema on every single lambda call is a horrible idea. It's a very CPU-intensive job and has to be avoided; that's why you should cache the built schema between lambda function calls.

> Now, I don't know at this point how the internals work, but my instinct is that there should be a means of re-compiling the schema without killing the process, by clearing out any types wherever they are cached.

You can call MetadataStorage.clear(), but I'm not sure whether every module file will be evaluated again. You would need to do this before any other imports (requires).

> when one makes changes to their schema, offline will re-compile and reload the typescript, but because of the workaround in point 1 above, the schema will not be re-built. This requires one to manually kill the offline process and restart it.

Is there any recompile hook available, like module.hot for webpack's HMR? You could try using it together with the metadata clearing described above.

@MichalLytek MichalLytek added the Question ❔ Not future request, proposal or bug issue label Oct 5, 2018
naturalethic (Author) commented:

> And building the schema on every single lambda call is a horrible idea. It's a very CPU-intensive job and has to be avoided; that's why you should cache the built schema between lambda function calls.

I don't think this is the case. My guess is that there are several threads running in the same environment, each importing the lambda function, so the module-level code runs multiple times, once for each 'server' node. The lambda function itself does not build the schema.

Thanks for the other suggestions, I'll give them a try.

MichalLytek (Owner) commented:

The easiest way to disable "caching" of metadata from decorators is to clear the global.TypeGraphQLMetadataStorage variable before importing types or building the schema. That global property is used in the getMetadataStorage() function. It was introduced to allow the lib's consumers to create their types in another npm module/package and then git/npm install it in their project; sharing storage between different node_modules instances using global was the only solution I could find.
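The failure mode can be illustrated with a self-contained sketch. This is not type-graphql's actual code; the registry name here is invented:

```typescript
// Decorators append to a registry stored on the global object, so the
// registry survives a module reload. A second evaluation of the same
// decorators then duplicates every entry unless the global is cleared first.
const g = globalThis as any;

function getRegistry(): string[] {
  if (!g.__metadataRegistry) {
    g.__metadataRegistry = [];
  }
  return g.__metadataRegistry;
}

// What a class decorator effectively does on module evaluation:
function registerType(name: string): void {
  getRegistry().push(name);
}

registerType("User"); // first evaluation of the module

// A hot reload re-runs the decorators. Deleting the global first (mirroring
// `delete global.TypeGraphQLMetadataStorage`) prevents duplicate entries:
delete g.__metadataRegistry;
registerType("User"); // re-evaluation registers cleanly
```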

Zn4rK commented Oct 7, 2018

Simply using the --useSeparateProcesses flag from serverless-offline worked for me. It also seems to solve a similar issue I was having with TypeORM.

Edit: That flag adds a bit of overhead, so we might want to find a real solution for this anyway.

MichalLytek (Owner) commented:

I don't get it...
The problem with AWS Lambda is that it uses a single process for every request, so it re-evaluates the decorators on every request; that's why building the schema fails. So to make it work, we should store the built schema in global scope.
But for some reason that doesn't work for you, so you just fall back to separate processes to behave like a normal node.js app that creates the schema on every request? 😕

Zn4rK commented Oct 8, 2018

> The problem with AWS Lambda is that it uses a single process for every request, so it re-evaluates the decorators on every request; that's why building the schema fails. So to make it work, we should store the built schema in global scope.

I don't believe that the problem is on AWS Lambda itself. It's mainly in local development.

serverless-offline behaves a bit differently locally from how it does on AWS. The only time code outside of the handler (example) is run on AWS is on cold starts (if I understand it correctly).

When using serverless-offline locally you get a dev server that closely mimics the lifecycle on AWS. The biggest difference is that on every request it re-evaluates everything in that handler, and that's why we get that error.

> But for some reason that doesn't work for you, so you just fall back to separate processes to behave like a normal node.js app that creates the schema on every request? 😕

The documentation on serverless-offline is quite lacking regarding the different options that are available. After looking at the source code behind that flag, it's clearer what is actually happening: it does what you describe.

Clearing the metadata if we're on a local server (kinda) works:

```typescript
import { getMetadataStorage } from "type-graphql/metadata/getMetadataStorage";

if (process.env.IS_OFFLINE) {
  getMetadataStorage().clear();
}
```

Another solution might be to turn off cache invalidation on serverless-offline and use HMR or nodemon to reload the affected files.
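For the cache-invalidation route, older serverless-offline versions exposed a `--skipCacheInvalidation` CLI flag; the equivalent `serverless.yml` entry would look roughly like this (option names changed across serverless-offline versions, so treat this as a sketch, not a definitive config):

```yaml
# serverless.yml (sketch; option name varies by serverless-offline version)
custom:
  serverless-offline:
    skipCacheInvalidation: true
```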

MichalLytek (Owner) commented:

I will split this issue into two things. One is to provide a lambda example (covered by #52) and if it requires too much configuration, I will try to provide some helpers, maybe even as a plugin @typegraphql/lambda.

So closing this for housekeeping purposes 🔒

@MichalLytek MichalLytek added the Solved ✔️ The issue has been solved label Oct 22, 2018
joshstrange commented:

@Zn4rK Did you ever find a good solution? I'm running into the same issue with Serverless-offline+nestjs+type-graphql.

Also, where did you put your getMetadataStorage().clear(); code? I've tried a few obvious places in my code, but I just get Error: Cannot determine GraphQL output type for $ENTITY_NAME$, which I assume is because I'm killing the storage after it's loaded but before it's used, instead of just on reload/request.

Zn4rK commented Jul 21, 2019

@joshstrange What I eventually settled on was the BannerPlugin for webpack (I use serverless-webpack, which I can highly recommend):

```javascript
// In webpack.config.js
const slsw = require("serverless-webpack");
const webpack = require("webpack");

if (slsw.lib.webpack.isLocal) {
  plugins.push(
    /**
     * This is needed because both TypeORM and TypeGraphQL use a global
     * variable for storage. It is only needed in development.
     *
     * When a module that has been hot reloaded is requested, the decorators
     * are executed again, and we get new entries.
     *
     * @see https://github.com/typeorm/typeorm/blob/ba1798f29d5adca941cf9b70d8874f84efd44595/src/index.ts#L176-L180
     * @see https://github.com/MichalLytek/type-graphql/blob/1eb65b44ca70df1b253e45ee6081bf5838ebba37/src/metadata/getMetadataStorage.ts#L5
     */
    new webpack.BannerPlugin({
      entryOnly: true,
      banner: `
        delete global.TypeGraphQLMetadataStorage;
        delete global.typeormMetadataArgsStorage;
      `,
      raw: true,
    }),
  );
}
```

If you choose to go the getMetadataStorage().clear(); route, it should be the first thing in your handler.

joshstrange commented Jul 26, 2019

@Zn4rK I'm sorry to bug you again, and we can (if you are ok with it) move to email if GH is not the best place for this discussion (josh at joshstrange dot com). Is there any way I could get a little more information on how you have your project set up? Like your serverless.yml file, webpack.config.js, tsconfig.json? I know it's a huge ask, and if you don't want to, that's fine; I just figured I'd ask.

I've been futzing with this for days now and I can't seem to get serverless-offline, serverless-webpack, nestjs, typeorm, and type-graphql to all play nicely. I feel like I'm SO close: I've gotten as far as everything except type-graphql playing nicely in offline and on AWS (unoptimized and without using webpack, using serverless-typescript). I know it's asking a lot, but I really want my environment set up for local dev (that also works on lambda) so I can leverage graphql through nestjs but also have custom endpoints if I need them. I'm about to throw in the towel, drop nestjs, and try for something simpler, but that would be re-inventing the wheel on a bunch of stuff...

Either way thank you for what you've already done trying to help me!

Edit: If you aren't using NestJS or you are using some other framework I would love to know even that much so I could attempt to go down that path.

carlosdubus commented:

I don't think you need anything from this library. For local testing, use normal apollo (don't use serverless-offline); for lambda, use apollo-server-lambda. To achieve this, use a different entry point for local vs. lambda, or an environment variable.

This works on AWS Lambda as of today:

```typescript
import "reflect-metadata";
import { ApolloServer } from "apollo-server-lambda";
import { buildSchema } from "type-graphql";
import { resolvers } from "...";

// Built once at module load; warm invocations reuse the same promise.
const globalSchema = buildSchema({
    resolvers
});

async function getServer() {
    const schema = await globalSchema;
    return new ApolloServer({
        schema
    });
}

export function handler(event: any, ctx: any, callback: any) {
    getServer()
        .then(server => server.createHandler())
        .then(handler => handler(event, ctx, callback))
        .catch(callback);
}
```

omar-dulaimi commented:

I've been struggling with serverless-offline for the past couple of hours. Should we give up on it?

mateo2181 commented:

@carlosdubus can you share your webpack.config.js? I'm trying to deploy my GraphQL API to AWS using serverless, apollo-server-lambda, and type-graphql, but when I try to use the playground I get the error "message": "Internal server error".

RishikeshDarandale commented:

@MichalLytek, do you have an example with Apollo Server 4 and type-graphql@next?

8 participants