Reference Guide
This module provides the full interface of the standard Python logging library, so it can be used as a drop-in replacement for the standard library. It also provides additional logging functionality that Envase applications and services can use to generate consistent logs.
- class LogExecutionTime(operation_name, level=10, _test_time_func=None)
Decorator class that logs the start and end of the decorated function, including its execution time.
This class can be used with any function or method, but it should always decorate the entry points of services and functions; therefore, lambda handler functions and route handlers should be decorated with this class.
- Parameters
operation_name (str) – Preferably unique identifier for the decorated function. The function name should be used if it is unique; otherwise, it can be prefixed with the module name.
level (int) – Log level for the logging output. The default is DEBUG, which is preferable in most instances.
import en_logging

@en_logging.LogExecutionTime('get_resource', en_logging.INFO)
def get_resource():
    return Handler().execute()
- __call__(func)
Call self as a function.
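As an illustration of the general pattern, a minimal sketch of such a timing decorator follows. The class name and log messages here are hypothetical, not the library's actual implementation:

```python
import functools
import logging
import time

class LogExecutionTimeSketch:
    """Hypothetical sketch of a start/end/duration logging decorator."""

    def __init__(self, operation_name, level=logging.DEBUG):
        self.operation_name = operation_name
        self.level = level

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Log the start, run the function, then log the elapsed time
            # even if the function raises.
            logging.log(self.level, 'Starting %s', self.operation_name)
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                logging.log(self.level, 'Finished %s in %.3fs',
                            self.operation_name, elapsed)
        return wrapper
```

Because the wrapper uses functools.wraps, the decorated function keeps its original name and docstring, which matters for frameworks that route by handler name.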
- critical_check(condition, message, *args, **kwargs)
Logs a critical message and data if the condition fails.
- Parameters
condition (expression) – Conditional expression to evaluate.
message (str) – Message to log if the condition fails.
- Param args
Additional data to log.
- Param kwargs
Additional keyword arguments to log.
import en_logging as log

threads = get_available_thread_count()
log.critical_check(threads > 0, 'Running out of threads')
The critical log message is only logged if the condition is false.
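The various *_check helpers all share one pattern: evaluate the condition and log only on failure. A minimal sketch of that idea (hypothetical, not the library's implementation; the handling of the additional positional and keyword data is omitted):

```python
import logging

def check(logger, level, condition, message, *args):
    """Sketch: log `message` at `level` only when `condition` is falsy."""
    if not condition:
        logger.log(level, message, *args)
```

Under this sketch, critical_check corresponds to logging at logging.CRITICAL, debug_check at logging.DEBUG, and so on for the other levels.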
- debug_check(condition, message, *args, **kwargs)
Logs a debug message and data if the condition fails.
- Parameters
condition (expression) – Conditional expression to evaluate.
message (str) – Message to log if the condition fails.
- Param args
Additional data to log.
- Param kwargs
Additional keyword arguments to log.
import en_logging as log

customers = retrieve_customers()
log.debug_check(customers, 'No customers were found')
The debug log message is only logged if the condition is false.
- error_check(condition, message, *args, **kwargs)
Logs an error message and data if the condition fails.
- Parameters
condition (expression) – Conditional expression to evaluate.
message (str) – Message to log if the condition fails.
- Param args
Additional data to log.
- Param kwargs
Additional keyword arguments to log.
import en_logging as log

valid = validate(param)
log.error_check(valid, 'The parameter is invalid')
The error log message is only logged if the condition is false.
- info_check(condition, message, *args, **kwargs)
Logs an info message and data if the condition fails.
- Parameters
condition (expression) – Conditional expression to evaluate.
message (str) – Message to log if the condition fails.
- Param args
Additional data to log.
- Param kwargs
Additional keyword arguments to log.
import en_logging as log

available = check_feature_availability(feature)
log.info_check(available, f'The feature {feature} is not available')
The info log message is only logged if the condition is false.
- log_lambda_event(event, level=10)
Logs a lambda event object.
- Parameters
event (event) – Event object to be logged.
level (int) – Log level for which to log the event. By default, it uses DEBUG.
- log_raw_json(raw_json, prefix='', indent=None, level=10)
Logs Python dictionaries and arrays that can be serialized to JSON.
- Parameters
raw_json (json) – Object to log.
prefix (str) – Prefix for the output. If not specified, the string JSON is used.
indent (int) – Number of spaces to indent the resulting string. By default, there is no indentation, which is preferable when the logs end up in AWS CloudWatch because AWS formats the JSON correctly.
level (int) – Log level used to log the JSON object. By default, it uses DEBUG.
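Assuming the function wraps the standard json module (an assumption, not confirmed by the source), its behavior can be sketched as:

```python
import json
import logging

def log_raw_json_sketch(raw_json, prefix='JSON', indent=None,
                        level=logging.DEBUG):
    # Hypothetical sketch: serialize the object and log it with a prefix.
    logging.log(level, '%s: %s', prefix, json.dumps(raw_json, indent=indent))
```

With the default indent of None, the JSON is emitted on a single line, which keeps each log entry as one CloudWatch event.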
- log_response(response)
Logs an HTTP response status and body. This function evaluates the status code to determine the log level used. Successful responses are logged as INFO, client errors are logged as ERROR, and server errors as WARNING.
- Parameters
response (response) – The response object to log.
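The level selection described above can be sketched with a hypothetical helper that mirrors the documented mapping of status codes to levels:

```python
import logging

def level_for_status(status_code):
    """Map an HTTP status code to a log level, per the mapping above."""
    if status_code < 400:
        return logging.INFO      # successful responses
    if status_code < 500:
        return logging.ERROR     # client errors, as documented
    return logging.WARNING       # server errors, as documented
```

Note that this follows the mapping as documented, with client errors at ERROR and server errors at WARNING.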
- warning_check(condition, message, *args, **kwargs)
Logs a warning message and data if the condition fails.
- Parameters
condition (expression) – Conditional expression to evaluate.
message (str) – Message to log if the condition fails.
- Param args
Additional data to log.
- Param kwargs
Additional keyword arguments to log.
import en_logging as log

status = verify_available(service)
log.warning_check(status == 'valid', 'The service status is invalid')
The warning log message is only logged if the condition is false.
Application Level Logging
The modules in this package provide log initialization functionality depending on the type of application or service being implemented. Each Envase application or service can import the right module depending on whether the application is running on a machine through Flask, is deployed to the cloud with Zappa, or is a Chalice application.
Chalice Based Applications
en_logging.application.chalice
Ensures logging is initialized properly for Chalice applications. This module should be imported first in the app.py of the Chalice application:
# in app.py
#
import en_logging.application.chalice
import chalice
# other imports
# routes and function handlers.
Cloudwatch Logging Applications
en_logging.application.cloudwatch
Ensures logging is pushed to AWS CloudWatch. The IAM permissions necessary to push the logs are described in the Boto3 Credentials documentation. This module should be imported first in applications.
# in app.py
import en_logging.application.cloudwatch
# Other imports and application initialization logic.
- configure(level=30, format='%(levelname)s - %(message)s', log_group=None, base_stream_name=None, send_interval=None, filters=None)
To skip sending log information, add SKIP_CLOUDWATCH_LOGGING to the environment. This is sometimes necessary when running tests locally or in CI/CD.
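A sketch of how a module might honor that environment flag (hypothetical; the actual check performed by the library may differ):

```python
import os

def should_send_to_cloudwatch():
    # Skip pushing logs when SKIP_CLOUDWATCH_LOGGING is present in the
    # environment, e.g. for local test runs or CI/CD pipelines.
    return 'SKIP_CLOUDWATCH_LOGGING' not in os.environ
```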
Flask Based Applications
en_logging.application.flask
This module should be imported first in applications or services running through Flask that are installed on a machine.
# in app.py
import en_logging.application.flask
# Other imports and application initialization logic.
Zappa Deployed Flask Applications
en_logging.application.zappa
This module should be imported by serverless applications that are deployed through Zappa. The module should be imported in the main application script.
# in app.py
import en_logging.application.zappa
# Other imports and application initialization.
Logging Configuration Utilities
This module contains functionality to help with logging configuration.
- level_name_to_value(level_name)
Converts a log level name to its value, so that it can be used with the standard logging module.
- Parameters
level_name (str) – Name of the level.
- Returns
The numeric value of the log level.
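Assuming the conversion relies on the standard logging module's level constants (an assumption, not confirmed by the source), the behavior can be sketched as:

```python
import logging

def level_name_to_value_sketch(level_name):
    # Map a level name such as 'DEBUG' or 'error' to its numeric value.
    return getattr(logging, level_name.upper())
```

With the standard names, 'DEBUG' maps to 10, 'INFO' to 20, 'WARNING' to 30, 'ERROR' to 40, and 'CRITICAL' to 50.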
Specialized Logging
This package provides specialized utilities for logging. These are reusable logging functions that serve one specific purpose and may need to do some manipulation of objects so they are logged correctly.
AWS SQS Logging Utilities
en_logging.specialized.sqs
This module provides utilities for logging event objects in AWS Lambda functions triggered by SQS.
- log_lambda_event_from_sqs(sqs_event, level=10, json_logging_func=None)
This function formats and logs the SQS event payload that is sent by an SQS trigger when invoking a lambda function. The function ensures that the event metadata and payload are serialized consistently.
- Parameters
sqs_event (dict) – The event that triggered the function through SQS.
level (int) – The log level to use to log the event. By default, it uses DEBUG.
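An SQS-triggered lambda receives an event containing a Records list whose entries carry the message body and metadata. A hypothetical sketch of the formatting described above (not the library's actual implementation):

```python
import json
import logging

def log_sqs_event_sketch(sqs_event, level=logging.DEBUG):
    # Log each SQS record's message id together with its JSON payload.
    for record in sqs_event.get('Records', []):
        body = record.get('body', '')
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            # The body is not JSON; log it as a plain string instead.
            payload = body
        logging.log(level, 'SQS message %s: %s',
                    record.get('messageId'), json.dumps(payload))
```

Re-serializing the parsed payload with json.dumps keeps each message on one line so the metadata and payload land in a single log entry.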