Oracle Autonomous Health Framework

This article describes the installation and usage of the Oracle Autonomous Health Framework (AHF) on a Linux system.

As a basis, we use a RHEL 8.10 VM with Oracle Grid Infrastructure (GI) 19c and Oracle Database 19c installed. The installation steps are described here.

Installing Oracle Autonomous Health Framework (AHF)

We can download the latest version of Oracle AHF from this page on My Oracle Support. In my case it is version 25.3 (AHF-LINUX_v25.3.0.zip). We also need a current version of the Cluster Verification Utility (CVU) from here (Patch 30839369: Standalone CVU (OL8+, RHEL8+) January 2025). Next, we start the installation:

# run as the root user
unzip -q /sw/AHF-LINUX_v25.3.0.zip -d ~/ahf
cd ~/ahf; ./ahf_setup -silent -data_dir /opt/oracle.ahf
# install the standalone CVU into the AHF directory tree
unzip -q /sw/cvupack_linux_ol8_x86_64.zip -d /opt/oracle.ahf/common/cvu
Sample Output:
[root@lin2 ahf]# unzip -q /sw/AHF-LINUX_v25.3.0.zip -d ~/ahf
[root@lin2 ahf]# cd ~/ahf; ./ahf_setup -silent -data_dir /opt/oracle.ahf
AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_253000_26374_2025_04_24-18_56_55.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 25.3.0 Build Date: 202503270355

AHF Location : /opt/oracle.ahf

AHF Data Directory : /opt/oracle.ahf/data

Extracting AHF to /opt/oracle.ahf

Setting up AHF CLI and SDK

Setting up compliance autoruns from AHF

Configuring TFA Services

Discovering Nodes and Oracle Resources

Not generating certificates as GI discovered

Starting TFA Services
Created symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service -> /etc/systemd/system/oracle-tfa.service.
Created symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service -> /etc/systemd/system/oracle-tfa.service.

.--------------------------------------------------------------------------.
| Host | Status of TFA | PID   | Port | Version    | Build ID              |
+------+---------------+-------+------+------------+-----------------------+
| lin2 | RUNNING       | 28294 | 5000 | 25.3.0.0.0 | 250300020250327035530 |
'------+---------------+-------+------+------------+-----------------------'

Running TFA Inventory...

.---------------------------------------------------.
|            Summary of AHF Configuration           |
+-----------------+---------------------------------+
| Parameter       | Value                           |
+-----------------+---------------------------------+
| AHF Location    | /opt/oracle.ahf                 |
| TFA Location    | /opt/oracle.ahf/tfa             |
| Orachk Location | /opt/oracle.ahf/orachk          |
| Data Directory  | /opt/oracle.ahf/data            |
| Repository      | /opt/oracle.ahf/data/repository |
| Diag Directory  | /opt/oracle.ahf/data/lin2/diag  |
'-----------------+---------------------------------'

AHF binaries are available in /opt/oracle.ahf/bin

AHF is successfully Installed

Moving /tmp/ahf_install_253000_26374_2025_04_24-18_56_55.log to /opt/oracle.ahf/data/lin2/diag/ahf/
[root@lin2 ahf]# unzip -q /sw/cvupack_linux_ol8_x86_64.zip -d /opt/oracle.ahf/common/cvu
[root@lin2 ahf]# 
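
To verify the installation, we can check the status of the TFA services and the version of the unpacked CVU. This is just a quick sanity check; the paths below assume the locations used above:

# check that the TFA services are running (run as root)
/opt/oracle.ahf/bin/tfactl status

# show the version of the standalone CVU unpacked above
/opt/oracle.ahf/common/cvu/bin/cluvfy -version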

Running Health Checks with ORAchk

ORAchk can create an HTML health check report of the installed components:

# answer the Clusterware home prompt with "y" and check all running databases
echo y | orachk -dball
Sample Output:
[root@lin2 ahf]# echo y|orachk -dball
Clusterware stack is running from /u01/app/19.0.0/grid. Is this the correct Clusterware Home?[y/n][y]
List of running databases

1. orcl
2. None of above

.  .
.  .

Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

.  .  . . . .
.  .  . . . .  .  .  .  .  .  .  .  .  .  .  .  .
-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status
-------------------------------------------------------------------------------------------------------
  Host Name  CRS Installed  ASM HOME  RDBMS Installed  CRS UP  ASM UP  RDBMS UP  DB Instance Name
-------------------------------------------------------------------------------------------------------
  lin2       Yes            Yes       Yes              Yes     Yes     Yes       orcl
-------------------------------------------------------------------------------------------------------


Copying plug-ins

. .
.  .  .  .  .  .

*** Checking Best Practice Recommendations ( Pass / Warning / Fail ) ***

.

============================================================
                   Node name - lin2
============================================================
. . . . . .
 Collecting - ASM Disk Groups
 Collecting - ASM Disk I/O stats
 Collecting - ASM Diskgroup Attributes
 Collecting - ASM disk partnership imbalance
 Collecting - ASM diskgroup attributes
 Collecting - ASM diskgroup usable free space
 Collecting - ASM initialization parameters
 Collecting - Active sessions load balance for orcl database
 Collecting - Archived Destination Status for orcl database
 Collecting - Cluster Interconnect Config for orcl database
 Collecting - Database Archive Destinations for orcl database
 Collecting - Database Files for orcl database
 Collecting - Database Instance Settings for orcl database
 Collecting - Database Parameters for orcl database
 Collecting - Database Properties for orcl database
 Collecting - Database Registry for orcl database
 Collecting - Database Sequences for orcl database
 Collecting - Database Undocumented Parameters for orcl database
 Collecting - Database Workload Services for orcl database
 Collecting - Dataguard Status for orcl database
 Collecting - Files not opened by ASM
 Collecting - Log Sequence Numbers for orcl database
 Collecting - Percentage of asm disk  Imbalance
 Collecting - Process for shipping Redo to standby for orcl database
 Collecting - Redo Log information for orcl database
 Collecting - Standby redo log creation status before switchover for orcl database
 Collecting - /proc/cmdline
 Collecting - /proc/modules
 Collecting - CPU Information
 Collecting - CRS active version
 Collecting - CRS oifcfg
 Collecting - CRS software version
 Collecting - CSS Reboot time
 Collecting - Cluster interconnect (clusterware)
 Collecting - Clusterware OCR healthcheck
 Collecting - Clusterware Resource Status
 Collecting - Disk I/O Scheduler on Linux
 Collecting - DiskFree Information
 Collecting - DiskMount Information
 Collecting - Huge pages configuration
 Collecting - Interconnect network card speed
 Collecting - Kernel parameters
 Collecting - Linux module config.
 Collecting - Maximum number of semaphore sets on system
 Collecting - Maximum number of semaphores on system
 Collecting - Maximum number of semaphores per semaphore set
 Collecting - Memory Information
 Collecting - Monthly recommended patches for Grid Infrastructure
 Collecting - NUMA Configuration
 Collecting - Network Interface Configuration
 Collecting - Network Performance
 Collecting - Network Service Switch
 Collecting - OS Packages
 Collecting - OS version
 Collecting - Operating system release information and kernel version
 Collecting - Oracle executable attributes
 Collecting - Patches for Grid Infrastructure
 Collecting - Patches for RDBMS Home
 Collecting - Patches xml for Grid Infrastructure
 Collecting - Patches xml for RDBMS Home
 Collecting - RDBMS and GRID software owner UID across cluster
 Collecting - RDBMS patch inventory
 Collecting - Shared memory segments
 Collecting - Table of file system defaults
 Collecting - Voting disks (clusterware)
 Collecting - number of semaphore operations per semop system call
 Collecting - CRS Opatch version
 Collecting - CRS user time zone check
 Collecting - Clusterware patch inventory
 Collecting - Collect Data Guard TFA Data
 Collecting - Custom rc init scripts (rc.local)
 Collecting - Database Server Infrastructure Software and Configuration
 Collecting - Database details for Infrastructure
 Collecting - Disk Information
 Collecting - Grid Infrastructure user shell limits configuration
 Collecting - Interconnect interface config
 Collecting - Linux system service and RAC process configuration
 Collecting - Network interface stats
 Collecting - Root user limits
 Collecting - SHMAnalyzer to report potential Operating system resources usage
 Collecting - Verify ORAchk scheduler configuration
 Collecting - Verify TCP Selective Acknowledgement is enabled
 Collecting - Verify no database server kernel out of memory errors
 Collecting - Verify the vm.min_free_kbytes configuration
 Collecting - root time zone check
 Collecting - slabinfo
 Collecting - umask setting for GI owner

Data collections completed. Checking best practices on lin2.
------------------------------------------------------------

 INFO =>     ASM Important INFO
 INFO =>     Oracle database unified auditing recommendation for orcl
 WARNING =>  Dedicated Tablespace for Unified Audit Trail for orcl
 INFO =>     Oracle Data Pump Best practices
 WARNING =>  Linux Swap Size
 WARNING =>  Monitoring stale statistics for orcl
 INFO =>     Most recent ADR incidents for /u01/app/oracle/product/19.0.0/dbhome_1
 FAIL =>     Verify Database Memory Allocation
 CRITICAL => Verify minimum memory used to reassemble IP fragments
 CRITICAL => Verify maximum memory used to reassemble IP fragments
 INFO =>     Oracle GoldenGate failure prevention best practices
 INFO =>     SHMAnalyzer to report potential Operating system resources usage
 WARNING =>  Oracle database software owner soft stack shell limits
 UNDETERMINED =>     Verify share memory segment persistence at logout
 CRITICAL => Verify the vm.min_free_kbytes configuration
 WARNING =>  Archive log mode for orcl
 CRITICAL => Verify ORAchk scheduler configuration across cluster
 CRITICAL => Verify temporary location is not configured for auto cleanup
 WARNING =>  Primary database protection with Data Guard for orcl
 WARNING =>  Flashback database on primary for orcl
 INFO =>     Storage Minimum Requirements for Grid & Database Homes
 CRITICAL => Verify operating system hugepages count satisfies total SGA requirements
 WARNING =>  Interconnect NIC bonding config.
 WARNING =>  VIP NIC bonding config.
 WARNING =>  ASM disk group compatible.rdbms attribute
 FAIL =>     Verify the Alternate Archive Destination is Configured to Prevent Database Hangs for orcl
 WARNING =>  Check for Patch 30937410 /u01/app/oracle/product/19.0.0/dbhome_1
 FAIL =>     Check for Parameter db_lost_write_protect on orcl instance
 FAIL =>     Database init parameter DB_BLOCK_CHECKING on primary for orcl
 INFO =>     Operational Best Practices
 INFO =>     Database Consolidation Best Practices
 INFO =>     Computer failure prevention best practices
 INFO =>     Data corruption prevention best practices
 INFO =>     Logical corruption prevention best practices
 INFO =>     Database/Cluster/Site failure prevention best practices
 INFO =>     Client failover operational best practices
 WARNING =>  Check for Parameter fast_start_mttr_target on orcl instance
 INFO =>     Hang and Deadlock material
 WARNING =>  cgroup setting for critical database background processes for orcl
 WARNING =>  VKTM priority for orcl
 INFO =>     Database failure prevention best practices
 WARNING =>  Archivelog Mode for orcl
 WARNING =>  Check for Patch 33912872 /u01/app/oracle/product/19.0.0/dbhome_1
 WARNING =>  Check for Patch 33912872 /u01/app/19.0.0/grid
 INFO =>     Optimizer bug fixes with disabled fix control In 19c for orcl
 INFO =>     Software maintenance best practices
 CRITICAL => Verify transparent hugepages are disabled
 INFO =>     Oracle recovery manager(rman) best practices
 INFO =>     Database feature usage statistics for orcl
 WARNING =>  Disk I/O Scheduler on Linux
 WARNING =>  Monitoring changes to schema objects for orcl
 WARNING =>  session_cached_cursors parameter for orcl
 WARNING =>  Check for tainted kernel by non-Oracle modules and 3rd party security software installed from package
------------------------------------------------------------

UPLOAD [if required] - /opt/oracle.ahf/data/lin2/orachk/user_root/output/orachk_lin2_orcl_042525_083837.zip

[root@lin2 ahf]#
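
If we do not want to check all running databases, ORAchk also accepts a comma-separated list of database names via -dbnames. A minimal variant of the run above, using the orcl database from this setup:

# restrict the health checks to the orcl database
orachk -dbnames orcl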

The zip file shown in the UPLOAD line above contains a nice HTML report with recommendations for fixing the identified issues.
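
To read the report, we unpack the zip file. A short sketch, assuming the usual ORAchk layout in which the archive unpacks into a directory with the same base name as the zip:

# unpack the report and locate the HTML file
unzip -q /opt/oracle.ahf/data/lin2/orachk/user_root/output/orachk_lin2_orcl_042525_083837.zip -d /tmp
ls /tmp/orachk_lin2_orcl_042525_083837/*.html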

Further information