NetworkMiner is an open source network forensics tool that extracts artifacts, such as files, images, emails and passwords, from captured network traffic in PCAP files. NetworkMiner can also be used to capture live network traffic by sniffing a network interface. Detailed information about each IP address in the analyzed network traffic is aggregated to a network host inventory, which can be used for passive asset discovery as well as to get an overview of which devices are communicating. NetworkMiner is primarily designed to run on Windows, but can also be used on Linux.
NetworkMiner has, since the first release in 2007, become a popular tool among incident response teams as well as law enforcement. NetworkMiner is today used by companies and organizations all over the world.
User credentials (usernames and passwords) for supported protocols are extracted by NetworkMiner and displayed under the "Credentials" tab. The credentials tab sometimes also shows information that can be used to identify a particular person, such as user accounts for popular online services like Gmail or Facebook.
Another very useful feature is that the user can search sniffed or stored data for keywords. The keyword search functionality allows the user to enter arbitrary string or byte patterns to search for.
NetworkMiner Professional can be delivered either as an Electronic Software Download (ESD) or shipped physically on a USB flash drive. The product is exactly the same, regardless of delivery method. NetworkMiner is a portable application that doesn't require any installation, which means that the USB version can be run directly from the USB flash drive. However, we recommend that you copy NetworkMiner to the local hard drive of your computer in order to achieve maximum performance.
Install Mono (a cross-platform, open source .NET framework), download and extract NetworkMiner, and then start NetworkMiner with "mono NetworkMiner.exe". For more details, please see our HowTo install NetworkMiner in Ubuntu, Fedora and Arch Linux blog post.
The support for Mono on macOS is very limited, but you can try the following solution: install Mono with "brew install mono", download and extract NetworkMiner, and then start NetworkMiner with "mono --arch=32 NetworkMiner.exe". For more details, please see our Running NetworkMiner on Mac OS X blog post.
To sniff with raw sockets, you'll first need to create a firewall rule to allow NetworkMiner to capture incoming TCP packets. Run the command "wf.msc" to start Windows Defender Firewall and create a new inbound rule for NetworkMiner.exe. Next, start NetworkMiner as administrator and select a network interface in the drop-down list at the top of the GUI. Finally, start a live packet capture by clicking the Start button.
NetworkMiner is not designed to perform decryption, so files transferred inside TLS encrypted sessions, such as HTTPS, will not be extracted. NetworkMiner does, however, extract X.509 certificates from TLS handshakes to disk. You can use a TLS proxy, like PolarProxy, to decrypt TLS traffic and forward the decrypted traffic to NetworkMiner. See our video PolarProxy in Windows Sandbox for more details.
You can use Secure Sockets Layer (SSL) to encrypt connections between your Oracle endpoint and your replication instance. For more information on using SSL with an Oracle endpoint, see SSL support for an Oracle endpoint.
AWS DMS supports the use of Oracle transparent data encryption (TDE) to encrypt data at rest in the source database. For more information on using Oracle TDE with an Oracle source endpoint, see Supported encryption methods for using Oracle as a source for AWS DMS.
To create a task that handles change data capture (a CDC-only or full-load and CDC task), choose Oracle LogMiner or AWS DMS Binary Reader to capture data changes. Choosing LogMiner or Binary Reader determines some of the later permissions and configuration options. For a comparison of LogMiner and Binary Reader, see the following section.
In AWS DMS, there are two methods for reading the redo logs when doing change data capture (CDC) for Oracle as a source: Oracle LogMiner and AWS DMS Binary Reader. LogMiner is an Oracle API to read the online redo logs and archived redo log files. Binary Reader is an AWS DMS method that reads and parses the raw redo log files directly. These methods have the following features.
AWS DMS supports transparent data encryption (TDE) methods when working with an Oracle source database. If the TDE credentials you specify are incorrect, the AWS DMS migration task doesn't fail outright, but ongoing replication of encrypted tables can be affected. For more information about specifying TDE credentials, see Supported encryption methods for using Oracle as a source for AWS DMS.
For migrations with a high volume of changes, LogMiner might have some I/O or CPU impact on the computer hosting the Oracle source database. Binary Reader has less chance of having I/O or CPU impact because logs are mined directly rather than making multiple database queries.
For an Oracle source endpoint to connect to the database for a change data capture (CDC) task, you might need to specify extra connection attributes. This can be true for either a full-load and CDC task or for a CDC-only task. The extra connection attributes that you specify depend on the method you use to access the redo logs: Oracle LogMiner or AWS DMS Binary Reader.
You specify extra connection attributes when you create a source endpoint. If you have multiple connection attribute settings, separate them from each other by semicolons with no additional white space (for example, oneSetting;thenAnother).
Where the Oracle source uses ASM, you can work with high-performance options in Binary Reader for transaction processing at scale. These options include extra connection attributes to specify the number of parallel threads (parallelASMReadThreads) and the number of read-ahead buffers (readAheadBlocks). Setting these attributes together can significantly improve the performance of the CDC task. The following settings provide good results for most ASM configurations.
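The settings themselves are not included in this excerpt. As an illustration, a Binary Reader configuration that combines several parallel ASM read threads with a large read-ahead buffer could be expressed as extra connection attributes like the following (the specific values are examples commonly cited for ASM workloads, not prescriptions; tune them for your environment):

```
parallelASMReadThreads=6;readAheadBlocks=150000
```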
In addition, the performance of a CDC task with an Oracle source that uses ASM depends on other settings that you choose. These settings include your AWS DMS extra connection attributes and the SQL settings to configure the Oracle source. For more information on extra connection attributes for an Oracle source using ASM, see Endpoint settings when using Oracle as a source for AWS DMS.
You also need to choose an appropriate CDC start point. Typically when you do this, you want to identify the point of transaction processing that captures the earliest open transaction to begin CDC from. Otherwise, the CDC task can miss earlier open transactions. For an Oracle source database, you can choose a CDC native start point based on the Oracle system change number (SCN) to identify this earliest open transaction. For more information, see Performing replication starting from a CDC start point.
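One common way to obtain an SCN to use as a native start point is to read it from Oracle's V$DATABASE view. This is a sketch only; identifying the SCN of the earliest open transaction may additionally require inspecting V$TRANSACTION:

```sql
-- Current system change number (SCN) on the source database:
SELECT current_scn FROM v$database;
```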
For more information on configuring CDC for a self-managed Oracle database as a source, see Account privileges required when using Oracle LogMiner to access the redo logs, Account privileges required when using AWS DMS Binary Reader to access the redo logs, and Additional account privileges required when using Binary Reader with Oracle ASM.
For more information on configuring CDC for an AWS-managed Oracle database as a source, see Configuring a CDC task to use Binary Reader with an RDS for Oracle source for AWS DMS and Using an Amazon RDS Oracle Standby (read replica) as a source with Binary Reader for CDC in AWS DMS.
A self-managed database is a database that you configure and control, either a local on-premises database instance or a database on Amazon EC2. Following, you can find out about the privileges and configurations you need when using a self-managed Oracle database with AWS DMS.
Create a database link named AWSDMS_DBLINK on the primary database. DMS_USER uses this database link to connect to the primary database. Note that queries over the database link are executed from the standby instance to find the open transactions running on the primary database. See the following example.
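A statement of the following shape creates such a link (a sketch only: dms_user_password, primary_host, and primary_service_name are placeholders you must replace with values for your environment):

```sql
CREATE PUBLIC DATABASE LINK awsdms_dblink
  CONNECT TO dms_user IDENTIFIED BY dms_user_password
  USING '(DESCRIPTION=
            (ADDRESS=(PROTOCOL=TCP)(HOST=primary_host)(PORT=1521))
            (CONNECT_DATA=(SERVICE_NAME=primary_service_name)))';
```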
Here, name, value, and description are columns in the database view being queried, and rows are selected based on the value of name. If this query runs without error, AWS DMS supports the current version of the database and you can continue with the migration. If the query raises an error, AWS DMS doesn't support the current version of the database. To proceed with the migration, first convert the Oracle database to a version supported by AWS DMS.
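The query itself is not reproduced in this excerpt. Given the columns named above, a check of this shape against Oracle's V$PARAMETER view fits the description (an assumed reconstruction, not the verbatim query):

```sql
-- Illustrative version-compatibility check; name, value and
-- description are columns of Oracle's V$PARAMETER view:
SELECT name, value, description
FROM v$parameter
WHERE name = 'compatible';
```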
You can run Oracle in two different modes: ARCHIVELOG mode and NOARCHIVELOG mode. To run a CDC task, the database must be in ARCHIVELOG mode. To check whether the database is in ARCHIVELOG mode, execute the following query.
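The log mode can be read from Oracle's V$DATABASE view:

```sql
-- Returns either ARCHIVELOG or NOARCHIVELOG:
SELECT log_mode FROM v$database;
```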
To capture ongoing changes, AWS DMS requires that you enable minimal supplemental logging on your Oracle source database. In addition, you need to enable supplemental logging on each replicated table in the database.
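In standard Oracle SQL, minimal supplemental logging is enabled database-wide as follows (a sketch of the usual commands; run them as a suitably privileged user):

```sql
-- Enable minimal supplemental logging for the whole database:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Verify; returns YES or IMPLICIT when enabled:
SELECT supplemental_log_data_min FROM v$database;
```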
You can disable the default PRIMARY KEY supplemental logging added by AWS DMS using the extra connection attribute addSupplementalLogging. For more information, see Endpoint settings when using Oracle as a source for AWS DMS.
If a primary key exists, add supplemental logging for the primary key. You can do this either by adding supplemental logging on the primary key itself, or by adding supplemental logging on the individual primary key columns.
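In standard Oracle SQL this looks like the following (the schema and table names are hypothetical):

```sql
-- Add primary key supplemental logging on a table:
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```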
In some cases, the target table primary key or unique index is different than the source table primary key or unique index. In such cases, add supplemental logging manually on the source table columns that make up the target table primary key or unique index.
If the table has a unique index or a primary key, add supplemental logging on each column that is involved in a filter or transformation. However, do so only if those columns are different from the primary key or unique index columns.
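Column-level supplemental logging of this kind is added with a supplemental log group. The following is a sketch with hypothetical names, assuming the target table's unique key is (first_name, last_name) and differs from the source primary key:

```sql
-- Log the columns that make up the target's key on every change:
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP emp_target_key (first_name, last_name) ALWAYS;
```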