
PostgreSQL partitioning (a few simple steps)

History

I created a simple application for logistics purposes based on Django, PostgreSQL, and PostGIS.  The service receives a truck's position and places it on the map. It is really simple and runs on a low-performance virtual machine.  The main problem with the current solution is the poor performance of this cloud server.  Most cloud providers attach virtual servers' drives over the LAN, and as a result disk I/O is really slow.


root@sergiy:/home/serg# fdisk -l 
Disk /dev/xvda1: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvda2: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@sergiy:/home/serg# hdparm -tT /dev/xvda1

/dev/xvda1:
 Timing cached reads:   18876 MB in  1.99 seconds = 9499.04 MB/sec
 Timing buffered disk reads:  98 MB in  3.11 seconds =  31.55 MB/sec
root@sergiy:/home/serg# hdparm -tT /dev/xvda1

/dev/xvda1:
 Timing cached reads:   19510 MB in  1.99 seconds = 9805.60 MB/sec
 Timing buffered disk reads:  58 MB in  3.05 seconds =  19.04 MB/sec


I'm sure the drive itself is fast, but the network bottleneck limits it to 19.04 MB/sec.  The simplest way to solve this would be to get a real server or add more memory to the virtual one.  But I'm not ready for that kind of spending spree!
My application adds records to the database on a time basis. Records are never updated, and adding records related to the past is a really rare event. Sometimes a user runs a SELECT over a time range tied to a business period (day, week, month).  In this case, Postgres table partitioning is a good fit.

Analyzing and preparing the database

The big table with my data is named car_points and has the following structure:


geocar=# \d car_points
                                    Table "public.car_points"
   Column    |           Type           |                        Modifiers                        
-------------+--------------------------+---------------------------------------------------------
 id          | integer                  | not null default nextval('car_points_id_seq'::regclass)
 Speed       | integer                  | not null
 CreatedTime | timestamp with time zone | not null
 Course      | integer                  | not null
 Altitude    | integer                  | not null
 Sat         | integer                  | not null
 Point       | geometry(Point,4326)     | not null
 Car_id      | integer                  | not null
 SensorData  | jsonb                    | not null
Indexes:
    "car_points_pkey" PRIMARY KEY, btree (id)
    "car_points_0d7f622d" btree ("Car_id")
    "car_points_Point_id" gist ("Point")
    "idx_time" btree ("CreatedTime")

The 'id' column is required by Django; I don't know how to remove it. 'CreatedTime' is the column used as the partitioning criterion. The 'Point' column is part of the PostGIS functionality (this gave me some headaches).
   General information about partitioning is here.  Postgres information about partitioning.  I've decided to use the inheritance feature for my table.  It is also good to have a trigger that creates new partitions automatically.
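To make the plan concrete: with the inheritance approach, each weekly partition is just an ordinary child table carrying a CHECK constraint that names its week. A sketch of what one such child looks like (the table name follows the scheme used later in this post):

```sql
-- One weekly partition: a table inheriting all columns from car_points,
-- with a CHECK constraint tying it to a single year/week value.
CREATE TABLE car_points_2017_43 (
    CHECK (to_char("CreatedTime", 'YYYY_WW') = '2017_43')
) INHERITS (car_points);
```

Note that because to_char is not immutable, the planner cannot use this constraint for constraint exclusion; it only documents and enforces which rows belong in the child.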
First, and really important: make a backup!

I hope you are doing backups on a regular basis. I'm sure it is better to have two unneeded backups than no backup at the critical moment.
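For example (a sketch; the database name geocar is from this post, while the output filename and connection details depend on your setup):

```shell
# Custom-format dump of the whole database before any schema surgery;
# it can be restored selectively with pg_restore if the experiment goes wrong.
pg_dump -Fc -f geocar_before_partitioning.dump geocar
```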
Create a function that inserts each record into the correct table:

CREATE OR REPLACE FUNCTION createpar() RETURNS trigger
AS
$BODY$
DECLARE
   partition_date TEXT;
   partition TEXT;
BEGIN
   -- Partition name: parent table name plus year and week number
   partition_date := to_char(NEW."CreatedTime", 'YYYY_WW');
   partition := TG_TABLE_NAME || '_' || partition_date;
   IF NOT EXISTS(SELECT relname FROM pg_class WHERE relname = partition)
   THEN
    RAISE NOTICE 'Creating partition: %', partition;
    -- Should add a proper week-range check
    EXECUTE 'CREATE TABLE ' || quote_ident(partition) || ' (check (to_char("CreatedTime", ''YYYY_WW'') = ' || quote_literal(partition_date) || ')) INHERITS (' || quote_ident(TG_TABLE_NAME) || ');';
   END IF;
   EXECUTE 'INSERT INTO ' || quote_ident(partition) || '(id, "Speed", "CreatedTime", "Course", "Altitude", "Sat", "Point", "Car_id", "SensorData") VALUES (' ||
    NEW."id" || ',' ||
    NEW."Speed" || ',' ||
    quote_literal(NEW."CreatedTime") || ',' ||
    NEW."Course" || ',' ||
    NEW."Altitude" || ',' ||
    NEW."Sat" || ',' ||
    quote_literal(NEW."Point") || '::geometry,' ||
    NEW."Car_id" || ',' ||
    quote_literal(NEW."SensorData") || ');';
--   INSERT INTO partition VALUES(NEW.*);
   RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

The source code of this function is here: postgrsfunction
I would like to clarify some of the code:
The first two assignments in the function body generate the name of the partition using the following format: the table name plus the year and week number. The function renders the year and week into a string and checks pg_class for a relation with that name; if there is no partition for the new record, one is created with CREATE TABLE ... INHERITS.
After this the record is inserted into the new partition table.  Unfortunately, a concatenated EXECUTE string cannot pass PostGIS geometry as-is (NEW.* carries a geometry column, but the textual INSERT needs the geometry quoted as a string and cast back to geometry).  If you don't have a PostGIS column, you can use the commented-out one-line INSERT with NEW.* directly.
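As a side note, one way to sidestep the string-quoting problem entirely (a sketch I have not run against this exact schema) is to pass NEW as a query parameter with EXECUTE ... USING, which hands the geometry and jsonb columns over as real values rather than text:

```sql
-- Inside the trigger function: insert the whole NEW row into the partition
-- without building a textual VALUES list; $1 is bound to the record NEW,
-- and ($1).* expands it into the partition's column list.
EXECUTE 'INSERT INTO ' || quote_ident(partition) || ' SELECT ($1).*'
USING NEW;
```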

After this I've added a trigger that runs this function for every newly inserted record:


geocar=#                                 
CREATE TRIGGER createpartitiontrigger BEFORE INSERT on car_points FOR EACH ROW EXECUTE PROCEDURE  createpar();
A few minutes later the new partition has been created:

geocar=# \d
                           List of relations
  Schema  |               Name                |   Type   |    Owner    
----------+-----------------------------------+----------+-------------
 public   | car_car                           | table    | cartracking
 public   | car_car_id_seq                    | sequence | cartracking
 public   | car_points                        | table    | cartracking
 public   | car_points_2017_43                | table    | cartracking
 public   | car_points_id_seq                 | sequence | cartracking

Time to run a simple analysis:

geocar=# EXPLAIN ANALYZE select  count(*)  from car_points_2017_43  where "CreatedTime" > '2017-10-26';
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=882.14..882.15 rows=1 width=8) (actual time=89.705..89.706 rows=1 loops=1)
   ->  Seq Scan on car_points_2017_43  (cost=0.00..803.62 rows=31407 width=0) (actual time=7.821..58.318 rows=29811 loops=1)
         Filter: ("CreatedTime" > '2017-10-26 00:00:00+03'::timestamp with time zone)
 Planning time: 0.139 ms
 Execution time: 89.768 ms
(5 rows)

geocar=# EXPLAIN ANALYZE select  count(*)  from car_points  where "CreatedTime" > '2017-10-26';
                                                                     QUERY PLAN                                                                     
----------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=13748.21..13748.22 rows=1 width=8) (actual time=655.247..655.249 rows=1 loops=1)
   ->  Append  (cost=0.43..13656.98 rows=36490 width=0) (actual time=24.448..589.689 rows=68910 loops=1)
         ->  Index Only Scan using idx_time on car_points  (cost=0.43..12853.36 rows=5083 width=0) (actual time=24.446..347.378 rows=39099 loops=1)
               Index Cond: ("CreatedTime" > '2017-10-26 00:00:00+03'::timestamp with time zone)
               Heap Fetches: 39099
         ->  Seq Scan on car_points_2017_43  (cost=0.00..803.62 rows=31407 width=0) (actual time=0.040..116.462 rows=29811 loops=1)
               Filter: ("CreatedTime" > '2017-10-26 00:00:00+03'::timestamp with time zone)
 Planning time: 0.309 ms
 Execution time: 655.309 ms
(9 rows)

The SELECT on the parent uses our new partition for its data but still goes through the parent table's index.  So it is a good idea to create an index on each partition and to move the old data out of the parent table into the partitions.  Of course this requires dumping and reloading the data, but unlike MySQL it does not require stopping the database!
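A sketch of that cleanup (assuming the trigger from above is already in place; the table, column, and index names are the ones from this post, and this should be tried on a copy of the data first):

```sql
-- Give the partition its own time index, mirroring idx_time on the parent.
CREATE INDEX idx_time_2017_43 ON car_points_2017_43 ("CreatedTime");

-- Move rows stored directly in the parent into their partitions.
-- ONLY restricts a statement to the parent table itself, ignoring children;
-- re-inserting through car_points lets the BEFORE INSERT trigger route each
-- row into its weekly partition (the trigger returns NULL, so nothing
-- lands back in the parent).
BEGIN;
INSERT INTO car_points SELECT * FROM ONLY car_points;
DELETE FROM ONLY car_points;
COMMIT;
```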


I've used an idea from StreamBright: https://medium.com/@StreamBright/creating-partitions-automatically-in-postgresql-7006d68c0fbb
