Thursday, 24 August 2017

Database Logging


Why is logging required?
In an enterprise application, logged data is extremely valuable, not only for troubleshooting but also for auditing and analytics. Depending on what information gets logged, this data can feed analytics systems that build predictive models. For example, in an e-commerce landscape we can establish customer buying patterns from the transaction history logs of our customers. Nowadays, with tools like Apache Spark and Flume, we can process huge volumes of log data very quickly. This post is about how to use the Logback framework to store application log data in a relational database such as PostgreSQL.
With logging frameworks, we have something called an appender.


So, what are appenders?
An appender is the component responsible for taking a log request, building the log message in a suitable format, and sending it to the output destination.
Logging is typically done to one of the three destinations below:
  1. Console
  2. File
  3. Database
To log to the console we use a ConsoleAppender, for file-based logging we use a FileAppender, and for database logging Logback provides a DBAppender.
Database logging? What for? Files are fine for most cases, but some applications treat their logs as high-priority data, and those logs warrant the durability and queryability of a database.


So now that we know why logging is important and have an overview of appenders, let's get started...


How do we log to the database?


In order to use the Logback framework, you need the following libraries in your project:
  • logback-classic-1.0.13.jar
  • logback-core-1.0.13.jar
  • slf4j-api-1.7.5.jar
You can either download the above JAR files and add them to your classpath, or declare the dependencies in your project object model (POM) file as shown below.
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.0.13</version>
</dependency>

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.0.13</version>
</dependency>

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.5</version>
</dependency>
Now you are ready to use the Logback APIs.
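As a quick sanity check, the snippet below logs through the SLF4J facade with Logback as the backend. This is a minimal sketch, and the class name LogbackSanityCheck is just an illustration; with no logback.xml on the classpath yet, Logback falls back to its default console configuration.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackSanityCheck {

   private static final Logger log = LoggerFactory.getLogger(LogbackSanityCheck.class);

   public static void main(String[] args) {
      // With no configuration file present, this goes to the console
      // via Logback's built-in defaults.
      log.info("Logback wired up via SLF4J");
   }
}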


The next step is to create the logback.xml file on the application classpath (typically under src/main/resources in a Maven project).

Script for logback.xml


<?xml version="1.0" encoding="UTF-8"?>
<configuration>
   <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
       <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
       <encoder>
           <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n
           </pattern>
       </encoder>
   </appender>
   <appender name="db" class="ch.qos.logback.classic.db.DBAppender">
       <connectionSource
           class="ch.qos.logback.core.db.DriverManagerConnectionSource">
           <driverClass>org.postgresql.Driver</driverClass>
           <url>jdbc:postgresql://localhost:5432/simple</url>
           <user>postgres</user>
            <password>root</password>
       </connectionSource>
   </appender>


   <!-- the level of the root logger is set to DEBUG by default. -->
   <root level="TRACE">
       <appender-ref ref="stdout" />
       <appender-ref ref="db" />
   </root>
</configuration>

Above, I have created a console appender and a database appender.
The database appender requires the JDBC driver class, the JDBC URL, and the database credentials. These are supplied through the connectionSource property, which determines how Logback obtains database connections. Logback ships a few implementations here (DriverManagerConnectionSource, DataSourceConnectionSource, JNDIConnectionSource), and I went with the DriverManagerConnectionSource class.
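If you prefer configuring in code rather than XML, the same wiring can be done programmatically. Below is a minimal sketch using the same connection details as the XML above; the class name ProgrammaticDbAppenderSetup is just an illustration.

import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.db.DBAppender;
import ch.qos.logback.core.db.DriverManagerConnectionSource;

public class ProgrammaticDbAppenderSetup {

   public static void main(String[] args) {
      LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

      // Same connection details as the <connectionSource> element above.
      DriverManagerConnectionSource source = new DriverManagerConnectionSource();
      source.setContext(context);
      source.setDriverClass("org.postgresql.Driver");
      source.setUrl("jdbc:postgresql://localhost:5432/simple");
      source.setUser("postgres");
      source.setPassword("root");
      source.start();

      DBAppender appender = new DBAppender();
      appender.setContext(context);
      appender.setConnectionSource(source);
      appender.start();

      // Attach to the root logger so all loggers inherit the appender.
      context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).addAppender(appender);
   }
}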


Now let's test the configuration above by writing a small test class:


Code Snippet:


import java.text.DateFormat;
import java.util.Date;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestDbAppender {

   private static final Logger logger = LoggerFactory.getLogger(TestDbAppender.class);

   public TestDbAppender() {
      logger.info("Class instance created at {}",
               DateFormat.getInstance().format(new Date()));
   }

   public void doTask() {
      logger.trace("In doTask");
      logger.trace("doTask complete");
   }

   public static void main(String[] args) {
      logger.warn("Running code...");
      new TestDbAppender().doTask();
      logger.debug("Code execution complete.");
   }
}

Will just the above work? Where would the logs go? We have connected to a database, but which tables will the log entries be written to?
I looked into the Logback documentation, and according to it:
"The DBAppender inserts logging events into three database tables in a format
independent of the Java programming language. These three tables are:
  • Logging_event
  • Logging_event_property
  • Logging_event_exception


They must exist before DBAppender can be used."

Scripts for the tables:


-- Script for logging_event --

-- TABLE: logging_event

-- DROP TABLE logging_event;

-- The event_id default below requires this sequence to exist first.
CREATE SEQUENCE logging_event_id_seq;

CREATE TABLE logging_event
(
  timestmp bigint NOT NULL,
  formatted_message text NOT NULL,
  logger_name character varying(254) NOT NULL,
  level_string character varying(254) NOT NULL,
  thread_name character varying(254),
  reference_flag smallint,
  arg0 character varying(254),
  arg1 character varying(254),
  arg2 character varying(254),
  arg3 character varying(254),
  caller_filename character varying(254) NOT NULL,
  caller_class character varying(254) NOT NULL,
  caller_method character varying(254) NOT NULL,
  caller_line character(4),
  event_id bigint NOT NULL DEFAULT nextval('logging_event_id_seq'::regclass),
  CONSTRAINT logging_event_pkey PRIMARY KEY (event_id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE logging_event
  OWNER TO postgres;


-- Script for logging_event_property --

-- TABLE: logging_event_property

-- DROP TABLE logging_event_property;

CREATE TABLE logging_event_property
(
  event_id bigint NOT NULL,
  mapped_key character varying NOT NULL,
  mapped_value character varying(1024),
  CONSTRAINT logging_event_property_pkey PRIMARY KEY (event_id, mapped_key),
  CONSTRAINT logging_event_property_event_id_fkey FOREIGN KEY (event_id)
    REFERENCES logging_event (event_id) MATCH SIMPLE
    ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);
ALTER TABLE logging_event_property
  OWNER TO postgres;
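Rows land in this table only when log events carry Mapped Diagnostic Context (MDC) values, which we will come back to below. As a minimal sketch (the class name MdcDemo and the keys are just illustrations):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo {

   private static final Logger logger = LoggerFactory.getLogger(MdcDemo.class);

   public static void main(String[] args) {
      // Each MDC key/value pair active at log time becomes a row in
      // logging_event_property, keyed by the event's event_id.
      MDC.put("requestId", "req-42");
      MDC.put("userId", "alice");
      try {
         logger.info("Processing request");
      } finally {
         MDC.clear(); // avoid leaking context onto unrelated log events
      }
   }
}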


-- Script for logging_event_exception --

-- TABLE: logging_event_exception

-- DROP TABLE logging_event_exception;

CREATE TABLE logging_event_exception
(
  event_id bigint NOT NULL,
  i smallint NOT NULL,
  trace_line character varying(254) NOT NULL,
  CONSTRAINT logging_event_exception_pkey PRIMARY KEY (event_id, i),
  CONSTRAINT logging_event_exception_event_id_fkey FOREIGN KEY (event_id)
    REFERENCES logging_event (event_id) MATCH SIMPLE
    ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);
ALTER TABLE logging_event_exception
  OWNER TO postgres;
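This table is populated only when a log call passes a throwable along with the message. A minimal sketch (the class name ExceptionDemo is just an illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExceptionDemo {

   private static final Logger logger = LoggerFactory.getLogger(ExceptionDemo.class);

   public static void main(String[] args) {
      try {
         throw new IllegalStateException("simulated failure");
      } catch (IllegalStateException e) {
         // DBAppender stores each line of this stack trace as a separate
         // row in logging_event_exception (event_id, i, trace_line).
         logger.error("Task failed", e);
      }
   }
}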


Now run the main code. This will create entries in the three tables above.
  1. Every log call is treated as a log event. A row for it is added to the logging_event table; this row includes the timestamp, the formatted log message, and the logger name, among other fields.
  2. If the logger has any Mapped Diagnostic Context (MDC) or other context-specific values associated with it, then these are stored in the logging_event_property table. Keep in mind that many MDC values per event will slow down the logging process.
  3. Error-level messages often have stack traces associated with them. These are logged separately in the logging_event_exception table, with each trace line treated as a separate record. You can verify all of this with a quick query, as sketched after this list.
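To confirm that the run actually produced rows, you can query the tables over plain JDBC. This is a minimal sketch that assumes the same connection details used in logback.xml; the class name LogTableCheck is just an illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LogTableCheck {

   public static void main(String[] args) throws Exception {
      try (Connection conn = DriverManager.getConnection(
               "jdbc:postgresql://localhost:5432/simple", "postgres", "root");
           Statement stmt = conn.createStatement();
           ResultSet rs = stmt.executeQuery(
               "SELECT event_id, level_string, logger_name, formatted_message "
               + "FROM logging_event ORDER BY timestmp DESC LIMIT 5")) {
         // Print the five most recent log events.
         while (rs.next()) {
            System.out.printf("%d %-5s %s - %s%n",
                  rs.getLong("event_id"),
                  rs.getString("level_string"),
                  rs.getString("logger_name"),
                  rs.getString("formatted_message"));
         }
      }
   }
}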


Hope the above post helped you.

