
Overview


Before starting a project with Altibase, the server configuration must be determined. This document explains how to plan the CPU, memory, disk, and network capacity required for such a configuration.

This document was prepared based on the version below.

  • Altibase 5.5.1 or later

For errors and improvements related to this document, please contact the technical support portal or technical support center.

CPU capacity planning


CPU capacity planning is generally based on the TPC-C benchmark. This section describes the items and correction factors to apply for Altibase and provides an example calculation.

TPC-C


The http://www.tpc.org website publishes a benchmark result called tpmC for each system. It measures the throughput of a specific transaction type (New-Order) in an environment where various transactions occur, and CPU capacity is calculated from this value.

This document also assumes that the CPU capacity is calculated based on this. For more detailed information on TPC-C, please visit the http://www.tpc.org/ website.

CPU capacity planning criteria


If it is difficult to build a TPC-C test environment, the CPU capacity can be estimated by applying the correction factors below instead of an actual TPC-C result.

Applying a correction means multiplying the required tpmC value by the expected weight for each situation.

Refer to the table below for each correction item.

Item | Description | Range | Default value
Transactions per minute | Sum of the estimated transactions per minute on the target server | - | -
Basic tpmC correction | Correction to adjust the tpmC value measured in an optimal environment | - | 5
Peak-time load correction | Correction so that the system can operate properly under heavy load at peak time | 1.2 ~ 1.5 | 1.3
Database size correction | Correction considering the number of records in database tables and the total database volume | 1.5 ~ 2.0 | 1.7
Application structure correction | Correction for performance differences depending on the application structure and the required response time | 1.1 ~ 1.5 | 1.2
Application load correction | Correction for cases where batch jobs, etc. run at the same time as peak-time online operations | 1.3 ~ 2.2 | 1.7
System margin correction | Margin ratio for an unexpected increase in workload | - | 1.3
System target utilization rate | CPU utilization rate assumed for stable operation of the system | - | 0.7
Replication correction (*) | Correction for an Altibase replication environment | - | 1.1
Memory DB correction (*) | Correction for the performance advantage when only the Altibase memory DB is used | - | 0.3
  • If Altibase replication is not used, the "Replication correction" item is not applied.
  • Unless only the memory DB is used, the "Memory DB correction" item is not applied.

Transactions per minute

The number of transactions per minute is calculated after first estimating the number of concurrent users, as follows.

Classification | Range | Default value | Remark
Number of concurrent users | 10% ~ 30% of total users | 20% | The user growth rate and the system operating period must also be applied
Number of transactions per user | - | Number of operations * number of transactions per operation | -
 
Transactions per minute = number of concurrent users * number of operations * number of transactions per operation
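As an illustration only, the two formulas above can be combined in a short Python sketch. All figures are placeholder example values, not recommendations, and the rounding mirrors the worked example later in this section.

    # Sketch of the "transactions per minute" estimate described above.
    # All inputs are example values; replace them with figures for the target system.
    total_users = 1000          # users who can access the system
    concurrent_ratio = 0.20     # 10% ~ 30% of total users (20% is the default)
    annual_growth = 0.10        # annual user growth rate
    operating_years = 5         # planned system operating period
    operations = 5              # number of operations per user
    tx_per_operation = 10       # number of transactions per operation

    # Total users * concurrency ratio * (1 + growth rate)^operating period
    concurrent_users = int(total_users * concurrent_ratio * (1 + annual_growth) ** operating_years)
    tpm = concurrent_users * operations * tx_per_operation

    print(concurrent_users)     # 322
    print(tpm)                  # 16100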


Basic tpmC correction

The tpmC published on the TPC homepage is measured in an optimal environment and differs from the actual operating environment.

Therefore, the tpmC value measured in the benchmark environment must be corrected to reflect the real environment. This is called the "basic tpmC correction", and a fixed value of 5 is applied.

Peak-time load correction

Since the system is generally subjected to a load about 20~50% heavier than usual during peak time, a weight of 1.2~1.5 is applied to account for this.

Classification | Applied value | Description
High | 1.5 | A very heavy load occurs at a specific time or on a specific day
Middle | 1.4 | A specific peak day exists and a heavy load occurs throughout that day
Low | 1.3 | A heavy load occurs at a specific time every day or every week
Other | 1.2 | A peak time exists, but the difference from the normal load is not large

Database size correction

This correction is applied considering the total database size and the table with the most records.

If it is difficult to derive an accurate value and no specific weighting can be applied, the general value of 1.7 is used.

Size \ Number of records | 8 million or less | 32 million or less | 126 million or less | 256 million or less | 260 million or more
50GB or less | 1.50 | 1.55 | 1.60 | 1.65 | 1.70
500GB or less | 1.60 | 1.65 | 1.70 | 1.75 | 1.80
1TB or less | 1.70 | 1.75 | 1.80 | 1.85 | 1.90
2TB or less | 1.80 | 1.85 | 1.90 | 1.95 | 1.95
2TB or more | 1.85 | 1.90 | 1.90 | 1.95 | 2.00

Application structure correction

Application structure correction accounts for performance differences depending on the structure of the application and the required response time.

The applied values are as follows; no correction is applied when the expected response time is 5 seconds or more.

Response time | Within 1 sec | 2 sec | 3 sec | 4 sec | 5 sec or more
Applied value | 1.5 | 1.35 | 1.2 | 1.1 | 1.0

Application load correction

Application load correction accounts for cases where batch operations and other jobs run at the same time as peak-time online operations.

If additional operations (backup, reporting, statistics, etc) are required to be processed in addition to the specified operation, the required processing power must be corrected accordingly.

The applied value is as follows, and normally, 1.7 is applied.

Classification | Applied value | Description
High | 1.9 ~ 2.2 | When there are many additional operations, such as batch jobs
Middle | 1.6 ~ 1.8 | When some batch operations occur while online transactions are running
Low | 1.3 ~ 1.5 | When there are few additional operations, such as batch jobs, other than online transactions

System margin correction

This is a correction to ensure system stability in case of an unexpected increase in operation. The value is fixed at 1.3.

System target utilization correction

Generally, a database system is designed assuming 100% CPU usage, but for stable operation the CPU is not actually used at 100% in the operating environment.

The system target utilization correction reflects the actual operating environment. In most cases, the maximum target CPU usage is 70%, so a value of 0.7 is applied.

Replication correction

Applied when Altibase replication is used.

A default value of 1.1 is applied as a correction for handling the Altibase replication load.

Memory DB correction

This does not apply when both disk DB and memory DB are used or only disk DB is used.

When only the memory DB is used, it performs roughly three times faster than the disk DB, so 1/3 ≈ 0.3 is applied.

CPU Capacity calculation example

CPU capacity can be calculated as follows by applying the above correction factor items.

User input items
Item | Example
Number of users who can access the system | 1,000
Percentage of concurrent users (%) | 20%
Annual user growth rate | 10%
System operating period | 5 years
Number of operations | 5
Number of transactions per operation | 10

Transaction volume calculation
Classification | Result value | Calculation | Description
Number of concurrent users | 322 | 1,000 * (20/100) * (1 + 10/100)^5 | Total users * percentage of concurrent users * (1 + annual growth rate)^operating period
Transactions per minute | 16,100 | 322 * 5 * 10 | Number of concurrent users * number of operations * number of transactions per operation

Applied correction factors
Classification | Result value | Calculation | Description
Basic tpmC correction | 80,500 | 16,100 * 5 | The fixed value of 5 is applied
Peak-time load correction | 104,650 | 80,500 * 1.3 | The value of 1.3 for "Low" is applied
Database size correction | 177,905 | 104,650 * 1.7 | The general value of 1.7 is applied
Application structure correction | 266,857 | 177,905 * 1.5 | The value of 1.5 for "within 1 second" is applied
Application load correction | 453,656 | 266,857 * 1.7 | The general value of 1.7 is applied
System margin correction | 589,752 | 453,656 * 1.3 | The fixed value of 1.3 is applied
System target utilization correction | 412,826 | 589,752 * 0.7 | The fixed value of 0.7 is applied
Replication correction (*) | 454,108 | 412,826 * 1.1 | The value of 1.1 is applied
Memory DB correction (*) | 136,232 | 454,108 * 0.3 | The value of 0.3 is applied
 

Whether to apply the items marked with (*), "Replication correction" and "Memory DB correction", should be decided according to the operating environment.

Using the value of 412,826 tpmC calculated above (without the optional corrections), the CPU capacity can be estimated as follows.

  • 412,826 / 20,000 = 20.6

That is, about 21 CPUs capable of processing 20,000 tpmC each are required (rounding up).

Alternatively, a server can be selected by comparing this value with the tpmC figures officially published on tpc.org.

(http://www.tpc.org/tpcc/results/tpcc_results.asp?print=false&orderby=tpm&sortby=desc)
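The correction chain in the example above can also be written as a small Python sketch. It only illustrates the multiplication order; the factor values and the assumed 20,000 tpmC per CPU are taken from the example and should be replaced with values appropriate to the target environment. Intermediate results are truncated to whole numbers, as in the table above.

    import math

    tpm = 16100                  # transactions per minute from the previous step
    corrections = [
        ("basic tpmC correction", 5),
        ("peak-time load correction", 1.3),
        ("database size correction", 1.7),
        ("application structure correction", 1.5),
        ("application load correction", 1.7),
        ("system margin correction", 1.3),
        ("system target utilization", 0.7),
        # Optional items, applied only when relevant to the environment:
        # ("replication correction", 1.1),
        # ("memory DB correction", 0.3),
    ]

    required_tpmc = tpm
    for name, factor in corrections:
        required_tpmc = int(required_tpmc * factor)   # truncate, as in the example table
        print(name, required_tpmc)                    # ends at 412826

    tpmc_per_cpu = 20000                              # assumed capacity of one CPU (example)
    print("CPUs required:", math.ceil(required_tpmc / tpmc_per_cpu))   # 21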

 

Memory capacity calculation

 


The following three items should be considered when calculating the memory capacity.

  1. Memory DB capacity
  2. Size of buffer area for disk DB
  3. Memory for session query statement execution

Memory DB storage space 


Tables created in memory tablespaces are allocated pages in units of 32KB. Each 32KB page is divided into units called slots according to the record length of the table.

The exact size and number of slots in a page can be checked in the Altibase V$MEMTBL_INFO performance view.

Memory DB capacity calculation


In the Altibase memory DB, all data and indexes reside in memory.

Therefore, the capacity is calculated from the schema of the memory tables to be created in the operating environment and the expected number of records.

It can be calculated as follows. The record header length differs depending on the Altibase version, so be careful when calculating the capacity.

Item | Description
Length of record | The sum of the lengths of all columns in one table
Length of record header | Altibase 5 / Altibase 6: 24 bytes, Altibase 7: 32 bytes
Estimated number of records | Estimated number of records for the storage period

Example
Item | Value
Length of record | 500 bytes
Length of record header | Altibase 5 / Altibase 6: 24 bytes, Altibase 7: 32 bytes
Estimated number of records | 10,000,000 (per year)
Estimated size of the table (Altibase 5 / Altibase 6) | (500+24) * 10,000,000 = 4,997.25 MB expected
Estimated size of the index (Altibase 5 / Altibase 6) | 8 * 10,000,000 = 76.29 MB expected
Estimated size of the table (Altibase 7) | (500+32) * 10,000,000 = 5,073.54 MB expected
Estimated size of the index (Altibase 7) | 8 * 10,000,000 = 76.29 MB expected

Correction factor applied for workload growth (Altibase 5 / Altibase 6)
Table | 4,997 * 1.1 (per year) = 5,496 MB
Index | 76 * 1.1 (per year) = 83 MB

Correction factor applied for workload growth (Altibase 7)
Table | 5,073 * 1.1 (per year) = 5,580 MB
Index | 76 * 1.1 (per year) = 83 MB
  • The index of a memory table is calculated as (number of records * 8 bytes) because only pointer information pointing to a location in real memory is managed.
    If there are several memory indexes, the formula '8 bytes * number of records * number of indexes' can be used.
  • The sum of the expected sizes of all tables calculated with the formula above becomes the memory DB capacity. In addition, a margin of 30% to 50% may be added to the calculated capacity.
    (The required margin is explained further in the "Considerations for free memory" section below.)
  • To find the exact length of a record after a table has been created, look up the MEM_SLOT_SIZE of the table in V$MEMTBL_INFO.
  • In the case of a volatile tablespace, the memory area is used whenever actual data exists, so it must also be included in the capacity calculation based on the criteria above.
    (All data in tables created in a volatile tablespace is lost when the Altibase server restarts. Volatile tablespaces are provided for fast processing, since transactions on their tables do not record physical transaction logs.)
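As a minimal sketch of the sizing formula in this section, the following Python snippet estimates the table and index size of one memory table. The record length, record count, number of indexes, and growth factor are example inputs; the record header sizes (24 bytes for Altibase 5/6, 32 bytes for Altibase 7) come from the table above.

    MB = 1024 * 1024
    RECORD_HEADER = {"altibase5/6": 24, "altibase7": 32}    # bytes, per the table above

    def memory_table_size_mb(record_len, records, version, indexes=1, growth=1.1):
        """Estimate memory table and index size in MB, following the formula above."""
        table_bytes = (record_len + RECORD_HEADER[version]) * records
        index_bytes = 8 * records * indexes          # 8-byte pointer per record, per index
        return table_bytes / MB * growth, index_bytes / MB * growth

    table_mb, index_mb = memory_table_size_mb(500, 10_000_000, "altibase5/6")
    print(round(table_mb), round(index_mb))   # about 5497 and 84 (the tables above show 5,496 / 83 due to truncation)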

Size of disk DB buffer


There is no fixed formula for calculating the size of the disk DB buffer.

However, it is recommended to allocate at least 10% of the size of the data that is expected to be accessed frequently among all disk DB data.

For example, if the size of frequently accessed data in the disk DB is estimated at 100GB, set the buffer size to 10GB or more (10%).

Considerations for free memory


In addition to the data, the memory size of the following areas must be considered when calculating the memory capacity.

  1. Query information and execution plan
  2. Temporary memory space for operations
  3. Separate memory margin rate by MVCC technique.

Query information and execution plan

When executing a query, Altibase first establishes an execution plan so that the query runs with an internally optimized plan. This step is very important for query processing performance.

However, to avoid the cost of establishing an execution plan for the same query every time, Altibase internally stores and manages the query information and execution plans of queries that have already been executed. This requires session memory.

There are two main types of such memory areas in Altibase: the SQL_CACHE area shared by all sessions and the memory area allocated to each session.

Since the size of SQL_CACHE does not grow dynamically, execution plans that cannot all be stored there are kept in the memory area allocated to the session.

Therefore, when calculating the memory capacity, the memory space to be allocated to the session must also be considered.

Temporary memory space

When a query including group by, aggregation, etc. is executed in the memory DB, separate storage space for storing the intermediate result set is required to process the query result requested by the user from the source data. The temporary memory space refers to a separate storage space used at this time.

(In the case of a disk DB, a temporary tablespace for the disk is used by default, but the memory area can be used by using hints to improve performance.)

Consequently, this area cannot be calculated with an explicit formula, because the amount of temporary memory used varies greatly with the type of query and the number of concurrent queries.

However, since this space is continuously reused after it is allocated, it does not significantly affect the total memory calculation if a reasonable estimate is used.

Separate margin rate by MVCC technique

Altibase uses a concurrency control technique called MVCC to maximize performance by minimizing contention when a read transaction occurs while a write transaction is in progress.

In addition, to use this MVCC technique, a previous image of versioned data is temporarily stored in the table. (For more detailed information on MVCC, please refer to the "MVCC Guide".)

Therefore, when calculating the memory capacity, the margin for MVCC must also be considered.

Items to consider for the margin | General formula
Query information memory area | Number of queries * 1MB
Temporary memory area for operations | Total memory table capacity * 0.1
Margin for MVCC | Total memory table capacity * 0.35

Example of memory capacity calculation


Assuming that a memory table grows by 1 million records per year and data is kept for 10 years, the capacity can be calculated as follows.

For the table in question, the actual record length can be checked by querying the MEM_SLOT_SIZE column of the V$MEMTBL_INFO performance view, as described above.

Altibase aligns memory to 8-byte boundaries for memory resource management. That is, a size that is not a multiple of 8 is rounded up to the next multiple of 8.

Therefore, if the record length is 282 bytes, 288 bytes must be used as the record length according to the alignment rule.
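The 8-byte alignment rule can be expressed as a one-line helper; the function name is illustrative and only restates the rounding described above.

    def align8(length_in_bytes):
        """Round a record length up to the next multiple of 8, per the alignment rule above."""
        return (length_in_bytes + 7) // 8 * 8

    print(align8(282))   # 288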

Once the record length is identified as above, the capacity can be calculated using the following tables. Altibase 5/6 and Altibase 7 are calculated separately.

Altibase 5 / Altibase 6
Item | Basis | 1 year (MB) | Maintenance period: 10 years (MB) | Growth correction factor 1.1 applied (MB)
Input data | 288 bytes (record) + 24 bytes (record header), 1,000,000 records per year | - | - | -
Data capacity | (288+24) * 1,000,000 | 297.54 | 2,975.4 | 2,975.4 * 1.1 = 3,272.9
Index capacity | 8 bytes * 2 (1 primary key, 1 index) | (8 * 1,000,000 * 2) = 15.25 | 152.5 | 152.5 * 1.1 = 167.7
Total | - | - | 3,127.9 | 3,440.6

Altibase 7
Item | Basis | 1 year (MB) | Maintenance period: 10 years (MB) | Growth correction factor 1.1 applied (MB)
Input data | 288 bytes (record) + 32 bytes (record header), 1,000,000 records per year | - | - | -
Data capacity | (288+32) * 1,000,000 | 305.17 | 3,051.7 | 3,051.7 * 1.1 = 3,356.8
Index capacity | 8 bytes * 2 (1 primary key, 1 index) | (8 * 1,000,000 * 2) = 15.25 | 152.5 | 152.5 * 1.1 = 167.7
Total | - | - | 3,204.2 | 3,524.5

When the input data is prepared for each table as above, the capacity for data and indexes can be calculated.

In addition, when the capacity of data and indexes is calculated, the total required memory capacity can be calculated as follows.

Item | Calculation data (example)
Memory DB capacity | 20GB (sum of the capacity of all memory tables)
Disk buffer | 5GB (calculated considering the size of the disk DB)
Margin for query information | Number of queries = 1,000; 1,000 * 1MB = 1GB
Margin for temporary memory | 20GB * 0.1 = 2GB
Margin for MVCC | 20GB * 0.3 = 6GB
Estimated memory capacity | 20GB + 5GB + 1GB + 2GB + 6GB = 34GB

This memory capacity calculation covers only Altibase itself. If a user application is to run on the same server, a separate capacity calculation must be performed for it.
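The total in the table above can be reproduced with the following sketch. All inputs are the example values from this section; note that the example applies 0.3 for the MVCC margin, while the margin table earlier lists 0.35, so adjust the ratio to suit the environment.

    GB = 1024 ** 3
    MB = 1024 ** 2

    memory_db = 20 * GB          # sum of all memory table capacities (example)
    disk_buffer = 5 * GB         # disk DB buffer (example)
    num_queries = 1000           # number of distinct queries expected

    query_margin = num_queries * 1 * MB     # 1MB per query
    temp_margin = memory_db * 0.1           # temporary memory for operations
    mvcc_margin = memory_db * 0.3           # MVCC margin used in the example above

    total = memory_db + disk_buffer + query_margin + temp_margin + mvcc_margin
    print(round(total / GB))                # about 34 (GB)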

Disk capacity calculation


The following four items should be considered when calculating disk capacity.

  1. Estimated space of memory DB
  2. Transaction log file space
  3. Estimated space of disk DB
  4. Disk DB backup space

Estimated space of memory DB


Altibase periodically stores the contents of the memory DB as a physical data file so that the changed data in the memory can be restored again even in the event of a sudden failure.

This process is called a checkpoint, and the data files created in this process are as large as the memory DB. (Please refer to the "Altibase Checkpoint Guide" for the detailed checkpoint procedure.)

The data files are created as a pair (two files), so twice the memory DB capacity is required on disk.

Estimated memory DB size | 20GB
Required physical disk space | 20GB * 2 = 40GB

Transaction log file


Altibase performs transaction logging to support recovery based on the WAL (Write-Ahead Logging) protocol.

This logging requires storage space for physical files, and the recorded files are called online log files.

Online log files are automatically deleted when a checkpoint occurs, but in some cases they are kept without being deleted, such as the following.

  • When logs that the replication Sender still has to send must be kept
  • When a transaction remains open for a long time because of a large number of changes

It is recommended to secure as much space as possible, to prevent a disk-full failure caused by online log files being retained continuously.

However, estimating the disk space for online log files with a fixed formula is not recommended.

Because the amount varies with the size of the system, it is recommended to base the estimate on data such as the number of log files created during a performance load test.

If it is difficult to perform such a test, at least 20GB of disk space is recommended.

Estimated space of disk DB


Unlike the memory DB, the capacity of the disk DB should be calculated in page units, because the PCTFREE and PCTUSED properties of each page limit the storage space available on it.

The size of an Altibase disk page is 8,192 bytes. The space remaining on a page after applying its PCTFREE setting is the space available for records.

The header lengths of records and indexes are as follows. Unlike the memory DB, in the disk DB the lengths of the columns that make up an index must be considered in the capacity calculation.

Record header length | Length (1 byte) + slot directory (2 bytes) + 34 bytes
Index header length | 10 bytes
  • Each column has 1 byte of length information. If a column is larger than 250 bytes, it has a 3-byte length header instead.
  • The slot directory area, used to quickly find the exact offset within a page, is included in the storage space calculation.

The disk DB capacity is calculated as follows. (For an index, the lengths of the columns that make up the key must be added together.)

Page size | 8,192 bytes
PCTFREE 10% applied | 8,192 - (8,192 * 0.1) = 7,373 bytes
Fixed page header | Space used for page management = 80 bytes
Fixed page footer | Space used for page consistency checks = 16 bytes
Available page size | 7,373 - 96 = 7,277 bytes
Length of the selected record | 288 bytes
Records per page | 7,277 / (288+34) = 22
Estimated number of records per year | 1,000,000
Final capacity calculation | 1,000,000 (records) / 22 (records per page) * 8,192 bytes = approximately 355.1 MB

Once the number of records that fit on a page is known, it can be used to determine the number of pages required for the expected record count.

However, since pages are actually allocated in 8,192-byte units, the final capacity is obtained by multiplying the number of pages by 8,192 bytes.

 

In the case of an index, the capacity is calculated by applying the combined length of the key columns that make up the index as the record length.

For the index header, 16 bytes can be used in the calculation.

After the capacity of all disk tables is calculated as above, the totals are summed and a margin of 30% to 50% is added.

Item | Calculation data (example)
Disk DB capacity | 500GB (sum of the capacity of all disk tables)
Margin ratio applied | 500GB * 0.3 (free space) = 150GB
Estimated disk capacity | 500GB + 150GB = 650GB
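The page-unit calculation above can be sketched as follows. The page size, PCTFREE, header and footer sizes, and the 34-byte record header come from this section; the record length and record count are example inputs.

    import math

    PAGE_SIZE = 8192       # bytes
    PAGE_HEADER = 80       # fixed page header (page management)
    PAGE_FOOTER = 16       # fixed page footer (consistency check)
    RECORD_HEADER = 34     # disk record header applied per record in the calculation above

    def disk_table_size_mb(record_len, records, pctfree=0.10):
        """Estimate disk table size in MB using the page-unit formula above."""
        usable = PAGE_SIZE - PAGE_SIZE * pctfree - (PAGE_HEADER + PAGE_FOOTER)
        records_per_page = int(usable // (record_len + RECORD_HEADER))
        pages = math.ceil(records / records_per_page)
        return pages * PAGE_SIZE / (1024 * 1024)

    print(round(disk_table_size_mb(288, 1_000_000)))   # about 355 (MB)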


Undo tablespace and temporary tablespace of Disk DB


Undo tablespace

The undo tablespace of the disk DB is used as space where copies of changed images are temporarily stored until a transaction ends.

Its usage is therefore difficult to predict, because it varies with the queries requested by users.

For example, if a transaction updates a 1TB table in a single operation, 1TB of undo tablespace is required.

Excluding such extreme cases, it is generally reasonable to size the undo tablespace at about 30% of the largest table.

Temporary tablespace

It is difficult to predict the usage of temporary tablespace as its usage varies according to the type of query.

Therefore, just like the undo tablespace, about 30% of the table with the largest capacity should be calculated as the temporary tablespace capacity.

For the undo tablespace and the temporary tablespace, sufficient space should be calculated so that there are no operational problems.

Backup space of Disk DB


For more detailed information related to backup, please refer to the "Altibase Backup and Recovery Guide" document. This section only explains the part of calculating the capacity related to backup.

Altibase must be operated in archive log mode for backup, and two kinds of disk space are required for it.

Archive log file directory | Space to store copies of the online log files
Backup space | Space for the data files saved by online backup, etc.

The backup space can be calculated as the sum of the estimated memory DB and disk DB capacities.

Since there are various backup types, such as full backups of the entire DB, incremental backups, and tablespace-level backups, it is recommended to calculate the space appropriate for the chosen backup type.

For the archive log file directory, it is safe to delete all previous log files that are no longer referenced once a backup has completed.

The required space therefore depends on the backup policy, since the amount of log files to keep differs according to the backup cycle and policy.

Consider the following example, which assumes a daily backup policy.

 
Item | Memory DB | Disk DB | Online log files per day
Estimated capacity | 20GB | 500GB | 200GB
Space required for backup | 20GB | 500GB | 200GB (archive)
Space considering the previous backup | 20 * 2 = 40GB | 500 * 2 = 1TB | 200GB * 2 = 400GB
  • The memory DB is stored as a pair of files (two files) at checkpoint, but only one stable file is backed up, so backup space equal to the memory DB capacity must be considered.
  • If a full backup policy is assumed for the disk DB, backup space equal to the expected disk DB capacity is required.
  • Archive log files must be kept until the next backup occurs, because all archive log files created during the backup cycle are needed, together with the previous backup, to restore to the current point in time.
  • In terms of the backup cycle, the previous backup copy can be deleted only after the current backup succeeds. If the previous backup copy is kept on disk, twice as much backup space is required.

Considerations when using backup with iLoader


In addition to the online backup method, if individual tables also need to be backed up, the iLoader utility can be used to back up a table as a text file.

In this case, the capacity of the data file backed up with the iLoader should be considered when calculating the disk capacity.

Generally, since a data separator of about 5 bytes should be used for each column, space obtained by the following arithmetic is required in addition to the actual expected disk DB capacity.

 

  • Separator capacity = 5 bytes * number of records * number of columns

Example> If a table with 10 columns contains 1 million records, 5 * 10 * 1,000,000 = approximately 50MB of additional space is required.

 

In other words, separator capacity is required in addition to the initially calculated disk DB capacity.
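The separator overhead can be estimated with the simple formula above; the following sketch uses the example figures and an assumed 5-byte separator per column.

    def iloader_separator_mb(records, columns, separator_bytes=5):
        """Approximate extra space taken by column separators in an iLoader text backup."""
        return records * columns * separator_bytes / (1024 * 1024)

    print(round(iloader_separator_mb(1_000_000, 10)))   # about 48 MB (roughly the 50MB in the example)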

In addition, if the backup files are compressed before being stored on disk, the required space should be calculated by applying the expected compression ratio to the capacity calculated above.

This space is not included in the disk capacity calculation example below, so be sure to take it into account when this backup method is used.

Example of disk capacity calculation


Item | Usage | Required disk space
Data file of disk DB | Space equal to the disk DB capacity | 500GB
Data file of memory DB | Memory DB capacity * 2 | 20GB * 2 = 40GB
Backup file | Memory DB + disk DB + archive log files | 560GB
Archive log file | Amount of online log files to keep during the backup cycle | 40GB
Online log file | 20GB (varies with system size) | 20GB
Total space required | 40 + 500 + 20 + 40 + (560 * 2) | 1,720GB
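The total in the example above can be reproduced as follows. All figures are the example values from this section, and the backup set is doubled because the previous backup copy is kept until the current backup succeeds.

    # Example disk capacity calculation, mirroring the table above (all values in GB).
    disk_db_datafiles = 500                  # disk DB data files
    memory_db = 20
    memory_db_datafiles = memory_db * 2      # checkpoint keeps a pair of data files -> 40
    archive_logs = 40                        # archive logs kept during one backup cycle
    online_logs = 20                         # varies with system size
    backup_set = memory_db + disk_db_datafiles + archive_logs    # 560

    total = (memory_db_datafiles + disk_db_datafiles + online_logs
             + archive_logs + backup_set * 2)                    # previous backup kept as well
    print(total)                                                 # 1720 (GB)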

 

Network capacity calculation


This document considers only replication when calculating network capacity. The network capacity for the service itself must be calculated separately by the user, and the replication network capacity covered here must be added to the total.

Packet size due to replication


For packets sent by replication, the following formulas apply for each transaction type.

Type | Size (bytes)
INSERT | Header (28 bytes) + (data length (4 bytes) + data (x))
UPDATE | Header (32 bytes) + (PK data length (4 bytes) + PK data (x)) + (data length before change (4 bytes) + data before change (x)) + (data length after change (4 bytes) + data after change (x))
DELETE | Header (28 bytes) + (PK data length (4 bytes) + PK data (x))
Commit / Rollback | 16 bytes

As shown above, each transaction has a basic header, and the packet size is the sum of the length information for each column and the length of the actual data being sent.

In a replication environment, INSERT must send all column data, UPDATE only needs to send the before and after images of the changed columns, and DELETE can be processed by sending only the PK data of the rows to be deleted.

Using the formula above, if 300-byte records inserted into a replication table with 10 columns must be processed at 10,000 TPS, the required bandwidth can be calculated as follows.

Type | Example calculation
INSERT | (28 + (4 * 10) + 300 + 16) * 10,000 = approximately 3.66 MB/s

- 28 bytes is the header of each replication send log.
- 4 * 10 is the 4 bytes of length information for each of the 10 columns.
- 300 bytes is the length of the data to be sent.
- 16 bytes is the Commit or Rollback log.

In the environment above, a network that can sustain about 3.66 MB/s (roughly 30 Mbps) for the insert workload is required to meet the user requirements.

When building the actual environment, correction factors for data growth and other factors should also be applied; as a margin, it is recommended to provision the network with at least 30% extra capacity.
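The bandwidth figure above can be sketched as follows; the header and log sizes come from the packet table, while the column count, data size, TPS, and 30% headroom are the example assumptions.

    INSERT_HEADER = 28     # bytes, replication send log header (per the table above)
    COMMIT_LOG = 16        # bytes, commit/rollback log
    LEN_FIELD = 4          # bytes of length information per column

    def insert_bandwidth_mb_per_sec(columns, data_bytes, tps):
        """Approximate replication bandwidth needed for INSERT traffic, in MB/s."""
        packet = INSERT_HEADER + LEN_FIELD * columns + data_bytes + COMMIT_LOG
        return packet * tps / (1024 * 1024)

    bw = insert_bandwidth_mb_per_sec(columns=10, data_bytes=300, tps=10000)
    print(round(bw * 1.3, 2))   # about 4.76 MB/s with 30% headroom (3.66 MB/s before margin)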

 

For replication, a gigabit network card is recommended, along with a direct connection using a crossover cable or a dedicated private network.

However, if the distance is not short, consider in advance whether replication can be processed without delay within the available network bandwidth.

Conclusion


In this chapter, we have discussed how to calculate the CPU, memory, disk, and network capacity required for Altibase.

However, there are many factors to consider for each item when estimating the capacity of a database system, so when configuring an actual system it is desirable to review sufficient data in advance and find the most suitable configuration.


