Statement:
This article is republished from the DEV Community website; the translation was provided by the developer community.
Click the link below to view the original English article:
On January 5, 2022, AWS Senior Solutions Architect Sara Gerion announced that Lambda Powertools TypeScript had entered public beta. Lambda Powertools is an AWS-sponsored open source project that improves the developer experience and encourages best practices when building on AWS Lambda. Lambda Powertools TypeScript joins the existing Java and Python Lambda Powertools libraries.
Contents
- Node.js Tooling for Lambda
- Decorators
- No Decorators
- Features
- Tracer
- Logger
- Metrics
- Package Size
- Summary
Node.js Tooling for Lambda
I'm a big fan of TypeScript; I actually co-authored a book on it. I don't use Java or Python very often, so although I was interested in Lambda Powertools, I hadn't tried it until now. Lambda Powertools TypeScript, middy, and DAZN Lambda Powertools are all Lambda tooling libraries for Node.js. Two things that set Lambda Powertools TypeScript apart from the similar libraries are that it is sponsored by AWS and that it supports decorators.
Lambda Powertools TypeScript supports both v2 and v3 of the AWS SDK for JavaScript, and the documentation provides examples for both.
Decorators
Views on decorators vary, but I think they are a useful abstraction, and they see plenty of use in object-oriented TypeScript. However, TypeScript has a significant constraint: decorators can be placed on class methods, but not on standalone functions, so we can't simply decorate an exported handler function.
This is a shame, because decorating a plain handler function would make for a great developer experience. To add a (hypothetical) doSomethingGreat decorator to our handler, we instead need to write something like the code below.
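A rough sketch of the pattern; doSomethingGreat is purely hypothetical (taken from the example name above), and a no-op stand-in is defined only so the sketch compiles:

```typescript
// What we'd like to write, but can't (decorators can't be applied to standalone functions):
//
//   @doSomethingGreat()
//   export const handler = async (event: unknown) => { /* ... */ };
//
// No-op stand-in for the hypothetical decorator, so this example is self-contained.
const doSomethingGreat = (): MethodDecorator => (_target, _propertyKey, descriptor) => descriptor;

class Lambda {
  @doSomethingGreat()
  public async handler(_event: unknown): Promise<void> {
    // business logic goes here
  }
}

// The extra lines: wrap the method in a class, instantiate it, and re-export the handler.
const handlerClass = new Lambda();
export const handler = handlerClass.handler.bind(handlerClass);
```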
That's five extra lines of code, which may not be a big deal. In any case, there's not much the Powertools team can do about it, as it may be years before functions support decorators. Bear in mind that if we choose to use decorators, and therefore classes, in our Lambdas, we need to be careful about how `this` is handled.
TypeScript (and therefore decorators) isn't natively supported in Lambda (unless you use Deno), so if we want to use decorators and TypeScript we also need a transpilation step. Fortunately, for users of AWS CDK, AWS SAM, and the Serverless Framework, this problem has largely been solved. If you plan to (or need to) roll your own build, esbuild is a good place to start and seems to be the bundler of choice.
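Whatever the build tool, the compiler itself also needs to allow decorators. A minimal sketch of the relevant tsconfig.json settings (the target and module values are just assumptions for a typical Node.js Lambda build):

```jsonc
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "experimentalDecorators": true,  // required for the @decorator syntax
    "emitDecoratorMetadata": true,
    "strict": true
  }
}
```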
No Decorators
We could just as well choose not to use classes or transpilers at all. Lambda Powertools TypeScript can be used without decorators, and indeed without TypeScript; we can use the library from vanilla Node.js.
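A rough sketch of that style, without classes or decorators (the service name and annotation are made up for illustration):

```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import { Tracer } from '@aws-lambda-powertools/tracer';
import type { Context } from 'aws-lambda';

const logger = new Logger({ serviceName: 'collections' }); // serviceName is assumed
const tracer = new Tracer({ serviceName: 'collections' });

export const handler = async (event: unknown, context: Context): Promise<void> => {
  logger.addContext(context);               // inject the Lambda context manually
  tracer.putAnnotation('source', 'manual');  // annotate the current X-Ray segment
  logger.info('Processing event', { event });
};
```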
The Lambda Powertools TypeScript documentation is very good, with several examples, and there are many more on GitHub.
Features
All three versions of Lambda Powertools have Metrics, Logger, and Tracer as core tools. Lambda Powertools Python also includes an Event Handler and several other useful utilities supporting batching, idempotency, validation, and more.
Each version of Lambda Powertools is developed independently, and its features are tailored to the needs of the different runtimes. This is different from AWS CDK, which uses jsii to publish the same constructs to multiple runtimes. While it can be frustrating to wait for some feature, it's probably the right call, since transpiling generic code into something that can serve multiple runtimes would only add complexity.
To try it out, I implemented all three utilities of Lambda Powertools TypeScript in a sample project. I chose the CDK Async Testing project because it includes several Lambda functions, as well as asynchronous workflows via EventBridge and Step Functions.
You can click here to see my instrumented code.
Tracer
To instrument my functions with Tracer, I needed to rewrite them as classes, since I chose to use decorators. I haven't really decided whether I prefer classes or not, so this was a chance to feel it out. I started with collect.ts; the function was originally 11 lines of code.
Below is the refactoring to include Tracer.
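A minimal sketch of the decorator-based Tracer setup (illustrative only, not the actual collect.ts; the service name is assumed):

```typescript
import { Tracer } from '@aws-lambda-powertools/tracer';
import type { Context } from 'aws-lambda';

const tracer = new Tracer({ serviceName: 'collections' }); // serviceName is assumed

class Lambda {
  @tracer.captureLambdaHandler()
  public async handler(_event: unknown, _context: Context): Promise<void> {
    // original business logic (e.g. recording the collection) goes here
  }
}

const handlerClass = new Lambda();
export const handler = handlerClass.handler.bind(handlerClass); // bind keeps `this` intact
```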
The function is now 25 lines of code, which isn't terrible. After adding the Metrics and Logger tools, it ends up at 33 lines. Of course the function is bigger now, because it does more. We should think of this growth as additive, not multiplicative: my 11-line function became 25 lines, but if it had originally been 111 lines, it would have grown to 125 lines rather than doubling.
So what do we get? The Tracer module wraps the AWS X-Ray SDK (a peer dependency). It doesn't add new capabilities; it just makes the SDK easier to use. In my experience that SDK is a bit cumbersome, so this is well worth it. We can decorate class methods to introduce new trace segments with a single line of code. We can also add new traces wherever we see fit using the imperative style. We can capture AWS clients too, although that does expose the X-Ray SDK somewhat.
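For illustration, here is a rough sketch of the imperative style with the beta-era API; it assumes an active trace (for example, a handler already captured by Tracer), and the segment and annotation names are made up:

```typescript
import { Tracer } from '@aws-lambda-powertools/tracer';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

const tracer = new Tracer({ serviceName: 'collections' }); // serviceName is assumed

// Wrap an SDK v3 client so its calls appear as subsegments in X-Ray.
export const ddb = tracer.captureAWSv3Client(new DynamoDBClient({}));

export const doCollection = async (): Promise<void> => {
  // Open a custom subsegment imperatively...
  const parent = tracer.getSegment();
  const subsegment = parent.addNewSubsegment('### collectPayment');
  tracer.setSegment(subsegment);

  tracer.putAnnotation('paymentType', 'partial'); // searchable key/value on the trace

  // ...do the work, then close the subsegment and restore the parent.
  subsegment.close();
  tracer.setSegment(parent);
};
```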
One thing that didn't work smoothly was a quirk between the X-Ray SDK and the DynamoDB DocumentClient. When using the DocumentClient with X-Ray, we need a small workaround, because the SDK needs access to the DocumentClient's underlying service property, which isn't reflected in the types. Here is how I worked around the problem.
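A sketch of the kind of workaround described, assuming the AWS SDK v2 DocumentClient (captureAWSClient wanted the underlying service, which isn't part of the DocumentClient's types):

```typescript
import { Tracer } from '@aws-lambda-powertools/tracer';
import { DynamoDB } from 'aws-sdk';

const tracer = new Tracer({ serviceName: 'collections' }); // serviceName is assumed
const docClient = new DynamoDB.DocumentClient();

// Reach past the types to hand the underlying DynamoDB service to the tracer.
tracer.captureAWSClient((docClient as unknown as { service: DynamoDB }).service);
```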
Update! This issue was resolved in version 0.5.0, and the code above can now be written like this:
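Continuing the sketch above, the cast should no longer be needed:

```typescript
// Since v0.5.0 the DocumentClient can be passed directly.
tracer.captureAWSClient(docClient);
```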
Many thanks for the quick turnaround on this improvement! My DX is already better. Here is my application:
Not only did I get really great service graphs, but also detailed traces.
The Tracer module adds the `## index.handler` segment seen in these screenshots. I'd like to add more traces to make better use of the tool. Overall, getting these detailed traces of the app across all of its features and services is impressive and very useful. Most of the work is done by X-Ray, but a better developer experience means more applications will get instrumented, which is certainly a good thing.
It's also worth pointing out that traces include logs: the logs for the instrumented segments show up right in the trace in CloudWatch.
This is also great. I can write application code in a distributed fashion, using small single-purpose functions, and still get a big-picture view of the program as it executes.
The cost of X-Ray should stay low as long as we set a sampling rate on high-throughput applications. Pricing appears to be per trace, so adding extra segments to a trace doesn't add cost.
Logger
The Logger tool is a drop-in replacement for any logger, including the console. The added value of Logger is its ability to inject the Lambda context into all log messages. When we decorate the handler method with @logger.injectLambdaContext() and then call logger.info, we see log messages like this:
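A minimal sketch of that setup; the service name and message are made up, and the output description in the comment is approximate:

```typescript
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger({ serviceName: 'collections' }); // serviceName is assumed

class Lambda {
  @logger.injectLambdaContext()
  public async handler(_event: unknown, _context: Context): Promise<void> {
    logger.info('Collection processed');
    // The emitted log line is structured JSON, roughly: level, message, service, timestamp,
    // plus the injected context (cold_start, function_name, function_arn, function_memory_size,
    // function_request_id) and the X-Ray trace id.
  }
}

const handlerClass = new Lambda();
export const handler = handlerClass.handler.bind(handlerClass);
```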
This is really handy if we plan to ingest logs into a search index, or if we just want to use CloudWatch Logs Insights, as the structure helps us search and filter log information. On the other hand, if we are only looking at a small amount of log information, it can be a bit cumbersome. We should keep in mind that any logging service (including CloudWatch) is metered, and extremely verbose logs can be expensive.
With that in mind, the Logger tool has a lot of nice options and lets us structure logs however we want. Additionally, Logger includes a sample-rate feature to help reduce costs.
Logger methods take one or more arguments. The first argument is either a string or an object with a message key. I found that if I passed a string as a subsequent argument, it got spread into an array of characters in the log output, so that's something to watch out for.
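A short sketch of the argument pattern: a string message first, then objects carrying extra keys (the keys and values here are made up):

```typescript
import { Logger } from '@aws-lambda-powertools/logger';

const logger = new Logger({ serviceName: 'collections' }); // serviceName is assumed

// Extra keys get merged into the structured log entry.
logger.info('Payment collected', { paymentId: 'abc-123', amount: 42 });

// Passing a bare string as the extra argument is what produced the character-array output:
// logger.info('Payment collected', 'abc-123');
```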
Metrics
The purpose of the Metrics tool is to publish custom CloudWatch metrics. Lambda automatically publishes some useful metrics, such as duration, concurrent executions, and throttles, but custom metrics let us add relevant business events to our observability picture.
Tracking reliability is important, but it's not the whole story! Arguably, custom metrics are the most important metrics. How many customers signed up this week? How many of them were able to complete valuable workflows? The answers to these questions live in our code, and if we emit custom metrics, they will show up in our dashboards as well.
The pricing for custom metrics can get expensive. The Embedded Metric Format helps manage costs and is supported by Lambda Powertools TypeScript. The documentation is clear enough that I don't need to elaborate, so let's look at the actual experience. I added a custom metric called "collectionSuccess" to my collect function. In my hypothetical application, partial payments are collected, and here I flag whether or not the collection could be paid.
Adding @metrics.logMetrics() causes any metrics we emit to be flushed to CloudWatch. Given the cost, we may or may not want this. To add a custom metric, we simply call metrics.addMetric.
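A minimal sketch of that setup; the namespace and service name are assumptions, and MetricUnits is the enum name used by the 0.x beta:

```typescript
import { Metrics, MetricUnits } from '@aws-lambda-powertools/metrics';

const metrics = new Metrics({ namespace: 'paymentCollections', serviceName: 'collections' });

class Lambda {
  // Flush metrics at the end of the invocation and optionally emit a cold start metric.
  @metrics.logMetrics({ captureColdStartMetric: true })
  public async handler(_event: unknown, _context: unknown): Promise<void> {
    metrics.addMetric('collectionSuccess', MetricUnits.Count, 1);
  }
}

const handlerClass = new Lambda();
export const handler = handlerClass.handler.bind(handlerClass);
```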
I instrumented my app to emit cold-start metrics, as well as other custom metrics that describe important events in the app, such as successful and failed payments and collections. Since the focus of my app is to showcase integration tests, I also set up custom metrics that track how long the tests take to run.
These metrics can be found in CloudWatch Metrics, in dashboards, or exported to third-party tools via the API.
Package Size
Adding all of the utilities to the project added about 600 KB uncompressed and 200 KB compressed. That seems reasonable given the value provided and the need to link into the AWS SDK or X-Ray SDK, and the team has done a good job of living up to its lean tenets.
Summary
Lambda Powertools focuses on the tools developers really need to optimize their applications and follow best practices. The core modules all focus on observability, which is a must-have and much appreciated. The API the team has developed works well both for developers who want decorators and for those who don't.
I hope this library catches on quickly. I'll be following the roadmap, keeping up with its progress, and getting involved where I can.
Article by Matt Morgan for AWS Community Builders