Documentation version 18.19.1.0 was released February 27, 2024.
Every effort was made to ensure that the information in this manual was accurate at the time of printing. However, information is subject to change without notice, and VIAVI Solutions reserves the right to provide an addendum to this manual with information not available at the time that this manual was created.
© Copyright 2024 VIAVI Solutions. All rights reserved. VIAVI and the VIAVI logo are trademarks of VIAVI Solutions Inc. ("VIAVI"). All other trademarks and registered trademarks are the property of their respective owners. No part of this guide may be reproduced or transmitted, electronically or otherwise, without the written permission of the publisher.
Specifications, terms, and conditions are subject to change without notice. The provision of hardware, services, and/or software is subject to VIAVI standard terms and conditions, available at www.viavisolutions.com/terms.
See Dashboards > Events for more.
GigaFlow supports NSEL, sFlow v5, jFlow, IPFIX, NetFlow v5, and NetFlow v9.
To manually set the sample rate for a device, go to Configuration > Infrastructure Devices > Detailed Device Information and Storage Settings.
At this point, both the client and server have received an acknowledgment of the connection. Steps 1 and 2 establish the connection parameter (sequence number) for one direction, and it is acknowledged. Steps 2 and 3 establish the connection parameter (sequence number) for the other direction, and it is acknowledged. With these, full-duplex communication is established. See Transmission Control Protocol at Wikipedia.
This document consists of two main parts: (i) the Observer GigaFlow How-To Guide and (ii) the Reference Manual for the user interface.
The How-To Guide contains useful work-throughs for many common GigaFlow tasks.
This is the official reference manual for the GigaFlow UI, included with every distribution. The Reference Manual provides detailed explanations for each GigaFlow function accessible through the user interface.
The GigaFlow Wiki contains additional detailed material including useful scripts, detailed installation instructions and frequently updated notes. Check here if you have not found what you are looking for in the How-To Guide or the Reference Manual.
GigaFlow is VIAVI's NetFlow collection and monitoring platform. GigaFlow helps you regain control of your IT network, giving you network-based security and application assurance.
From a performance perspective, GigaFlow allows you to:
From a security perspective, GigaFlow allows you to:
GigaFlow's benefits include:
GigaFlow's core components are:
GigaFlow supports NSEL, sFlow v5, jFlow, IPFIX, NetFlow v5, and NetFlow v9.
GigaFlow requires Eclipse-Temurin Java 17 - OpenJDK and PostgreSQL Database 11 (for versions 18.16.0.0 to 18.18.0.0).
GigaFlow requires Eclipse-Temurin Java 17 - OpenJDK and PostgreSQL Database 16.1 (for versions 18.19.0.0 and newer).
The recommended platform for GigaFlow is Windows Server 2016, English Edition. Installation must be performed using a local administrator account, not a domain administrator account.
GigaFlow will install:
NOTE: Due to changes in Oracle licensing, VIAVI has replaced Oracle Java in GigaFlow v18.16.0.0. Please note that while this new GigaFlow release no longer uses Oracle Java, the installation procedure will not remove the Oracle Java runtime software since other applications may still depend upon it. If you wish to uninstall Oracle Java, please refer to the instructions found in Removing Oracle Java.
Legacy GigaFlow versions 18.15.0.0 or older require the use of Oracle Java version 1.8_202 JRE (Java Runtime Environment), which may require a payment of additional license fees to Oracle. If you continue to use these legacy GigaFlow versions, please check directly with Oracle regarding licensing requirements/fees for Oracle Java.
On running the GigaFlow installer, the installation proceeds as follows. When complete, a new service called GigaFlow will be running and will be enabled to start whenever the server starts up.
1. This is the GigaFlow Setup welcome page:
Click Next to continue.
2. On the next page, you can choose whether or not to install start menu shortcuts:
Click Next to continue.
3. The next page displays the GigaFlow End User License Agreement (EULA); you must agree to the terms of the EULA to continue the installation or to use the product.
Click I Agree to continue.
4. On the next page, you are reminded of the system requirements for GigaFlow:
Click OK to continue.
5. After accepting the license agreement and acknowledging the reminder, you can define:
Click Next to continue.
6. Select a location for the GigaFlow application files. Again, the path name must not include spaces.
Click Install to continue.
7. When the installation is complete, you can click Show Details to see what the installer did.
Click Next to continue.
8. The installation is now finished. With the Run GigaFlow tick box checked, the installer will open the GigaFlow User Interface in your default browser when you click Finish.
9. Your browser will open at the GigaFlow login page. The default credentials are:
Below the login panel, you can see your GigaFlow installation version details.
10. After logging in for the first time, the Quick Start Settings page appears. You are prompted here to configure the minimal essential settings for GigaFlow to start working properly:
For example: D:/Data.
Note that you can also see and configure all these settings separately from the System menu, or by clicking the links provided next to each setting name.
If you want to stop seeing the Quick Start Settings page each time you log in, enable the option Do Not Show Quick Start On Startup. This setting is also available from System > Global.
Click Save when finished.
You are ready to use GigaFlow.
See the GigaFlow Wiki for further installation notes.
Figure: GigaFlow's login screen
Log in to GigaFlow from any browser using your credentials. GigaFlow will open on the Dashboards welcome screen.
See Log in in the Reference Manual.
After logging in for the first time, the Quick Start Settings page appears. You are prompted to configure the minimal essential settings for GigaFlow to start working.
You can also configure the rest of the settings as explained below.
Enter a name for your GigaFlow server. This does not have to be the hostname. This will be used in the title bar and when making calls back to the Viavi home server.
See System > Global in the Reference Manual for more. You can change the server name at any time by navigating to System > Global > General.
Specify the local drive that you want to monitor, for example: D:/Data.
See also System > Global in the Reference Manual for more. You can change the storage settings at any time by navigating to System > Global > Storage.
Enter how much free space to keep on disk before forensic data is aged out. This is the minimum free space to maintain on your disk before forensic data is overwritten. The default value is 20 GB.
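Outside GigaFlow, you can sanity-check a drive against that 20 GB default threshold yourself. The following is an illustrative sketch only, not part of GigaFlow; the mount point and threshold are assumptions you should adjust:

```shell
#!/bin/sh
# Illustrative only: warn when free space on a drive drops below a threshold,
# mirroring GigaFlow's default 20 GB minimum before forensic data is aged out.
# The mount point is an assumption; substitute your own data drive.
THRESHOLD_GB=20
MOUNT_POINT="${1:-/}"

# df -Pk prints free space in 1 KB blocks; convert to whole GB.
free_kb=$(df -Pk "$MOUNT_POINT" | awk 'NR==2 {print $4}')
free_gb=$((free_kb / 1024 / 1024))

if [ "$free_gb" -lt "$THRESHOLD_GB" ]; then
  echo "low: ${free_gb} GB free (below ${THRESHOLD_GB} GB threshold)"
else
  echo "ok: ${free_gb} GB free"
fi
```

On Windows, the same check can be made from the drive's Properties dialog or with `dir`.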
See also System > Global in the Reference Manual for more. You can change the storage settings at any time by navigating to System > Global > Storage.
Add new listener ports by filling in the port details and clicking the + icon.
See also System > Receivers in the Reference Manual for more. You can change the receiver settings at any time by navigating to System > Receivers.
Enable GigaFlow's call home functionality by configuring the proxy server, if one exists, at System > Global > Proxy. This information is required for several reasons: (i) to allow GigaFlow to call home and register itself; (ii) to let us know that your installation is healthy and working; (iii) to update blacklists. All calls home are in cleartext. See System > Licenses for more. The required information is:
See also System > Global in the Reference Manual for more.
You must enter a valid license key to use GigaFlow. Enter your license key in the text box at the end of the table at System > Licenses. Click ADD to submit.
See also System > Licenses in the Reference Manual.
To begin monitoring events on your network, subscribe to one or more blacklists at System > Watchlists.
See also System > Watchlists in the Reference Manual.
To poll your network infrastructure devices, you must provide GigaFlow with SNMP community strings and authentication details. You can do this at System > Global.
Enter the community name in the text box and click Save.
See also System > Global in the Reference Manual.
First Packet Response (FPR) is a useful diagnostic tool, allowing you to compare the difference between the first packet time-stamp of a request flow and the first packet time-stamp of the corresponding response flow from a server. By comparing the FPR of a transaction with historical data, you can troubleshoot unusual application performance.
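The FPR computation itself is simply a timestamp difference. As an illustrative sketch (the epoch timestamps below are made-up values in milliseconds, not GigaFlow output):

```shell
# Illustrative only: FPR is the gap between the first packet of a request flow
# and the first packet of the matching response flow. The timestamps below are
# hypothetical epoch values in milliseconds.
request_first_pkt_ms=1718000000123
response_first_pkt_ms=1718000000158

fpr_ms=$((response_first_pkt_ms - request_first_pkt_ms))
echo "First Packet Response: ${fpr_ms} ms"   # prints: First Packet Response: 35 ms
```

A rising FPR relative to historical values for the same server suggests the server (or the path to it) is slowing down.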
To begin using First Packet Response, you must specify the server subnets that you would like to monitor. Navigate to Configuration > Server Subnets.
Select the Server Subnets tab and enter a server subnet of interest. To add a new server subnet:
By clicking on the Servers tab, you can view a list of the identified servers that will be monitored. You can see a realtime display of the First Packet Response by clicking to enable the ticker beside each server.
By clicking on the Devices tab, you can view a list of infrastructure devices - routers - associated with the server subnets of interest.
Reports for all active monitored servers can be viewed at Reports > First Packet Response.
See also Use First Packet Response to Understand Application Behaviour in the How-to section and Configuration > Server Subnets and Reports > First Packet Response in the Reference Manual for more.
The following procedure helps you upgrade to a GigaFlow release that includes PostgreSQL 16.1.
This procedure assumes the following default locations for PostgreSQL 16.1:
If necessary, modify these locations accordingly.
Note: For data migration from PostgreSQL 11 (or a version earlier than PostgreSQL 16.1) to PostgreSQL 16.1, the data folders must reside on the same drive.
The upgrade to PostgreSQL 16.1 requires migrating the data from the old cluster version to the new major version. This happens outside the GigaFlow installation; to prevent data loss, it is highly recommended to back up the original data first. The following command dumps the existing PostgreSQL data to an SQL file that can be used to restore the data if the upgrade fails. In an administrator command prompt, run the following command:
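A hypothetical sketch of such a backup is shown below; the exact command in your GigaFlow documentation may differ, and the user, database name, and output path here are assumptions:

```shell
# Hypothetical sketch only -- adjust the user, database name, and output path.
# pg_dump writes a plain-SQL dump that psql can later replay to restore.
pg_dump -U postgres -d gigaflow -f gigaflow_backup.sql

# To restore later (e.g. after a failed upgrade), replay the dump with psql:
#   psql -U postgres -d gigaflow -f gigaflow_backup.sql
```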
Note: To restore data that was previously saved using the
Note: This will not install the latest PostgreSQL 16.1 version.
Install PostgreSQL version 16.1
Note: The data folder for PostgreSQL 16.1 must reside in the same drive as that of the previous PostgreSQL version.
Note: The old cluster does not need to be deleted.
Caution: If PostgreSQL 11 is used by GigaFlow and other applications, do not upgrade to PostgreSQL 16.1. Doing so will upgrade the old cluster to the newer version and make it unusable with PostgreSQL 11.
Caution: If GigaFlow is set to connect to a remote PostgreSQL installation, then do not upgrade to PostgreSQL 16.1.
Note: You can ignore the following message: The system cannot find the file specified.
Note: Let the migration script run to completion. This process can take from 10 minutes to several hours, depending on the size of the data being migrated. When the data migration succeeds, the message Data migration complete is displayed.
If the server has internet access, then you can run the following command, which will download the latest build and install it (causing GigaFlow service to restart):
Upgrading to PostgreSQL 16.1
Note: The parameter 11 represents the PostgreSQL version being upgraded from.
Note: The old cluster does not need to be deleted.
Caution: If PostgreSQL 11 is used by GigaFlow and other applications, do not run the ObserverGigaFlow_Upgrade_PG16.sh script. This will upgrade the old cluster to the newer version and make it unusable with PostgreSQL 11.
Caution: If GigaFlow is set to connect to a remote PostgreSQL installation, do not run the ObserverGigaFlow_Upgrade_PG16.sh script.
Starting with GigaFlow version 18.22.2.0 and newer, the GigaFlow installer will no longer include a separate PostgreSQL installer. Instead, the PostgreSQL binaries are now bundled directly within the GigaFlow installer. Specifically, PostgreSQL version 16.1 is included with GigaFlow version 18.22.2.0. The PostgreSQL service operates on the dedicated port 26906 and is registered under the service name GFPostgresSvc. This change simplifies the installation process because it eliminates the need for a separate PostgreSQL installation.
Upgrading from a release prior to GigaFlow version 18.19.0.0
If you are upgrading GigaFlow from a release prior to version 18.19.0.0 that is running a PostgreSQL version less than 16, then the system will be upgraded to PostgreSQL version 16.1. Additionally, your old data will automatically be migrated to this new version.
For details, refer to the Data Migration section.
Upgrading from GigaFlow version 18.19.0.0 release and newer
There are 3 possible upgrade scenarios:
In scenarios 1 and 2, when you install GigaFlow version 18.22.2.0, GigaFlow will be set up with a managed PostgreSQL version 16.1. The existing PostgreSQL service will be deregistered, and a new PostgreSQL service named GFPostgresSvc will be registered.
Note: The new PostgreSQL service will utilize the same data folder as the previous one. This entire process will be transparent to the user, ensuring a seamless transition.
Locally stored PostgreSQL 11 databases will be migrated to PostgreSQL 16.1 automatically. Depending on the size of your database, the migration process can take a long time to complete. The newly managed PostgreSQL database will not support external applications using the database. If this applies to you, migrate your data as needed.
To estimate the size of the database, perform the following steps:
Note: The part
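One common way to estimate the on-disk size of a PostgreSQL database is shown below. This is a generic PostgreSQL query, not GigaFlow-specific; the database name and connection details are assumptions:

```shell
# Hypothetical sketch: ask PostgreSQL for the on-disk size of the GigaFlow
# database. The database name and connection details are assumptions.
psql -U postgres -c "SELECT pg_size_pretty(pg_database_size('gigaflow'));"

# Or list every database with its size:
#   psql -U postgres -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;"
```

`pg_size_pretty` and `pg_database_size` are standard PostgreSQL functions, so the same query works on both the old and the new cluster.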
The duration of the data migration process can be shortened by decreasing the retention period of the forensics data (see Reducing the size of the GigaFlow database prior to installation section).
Note: You must complete this process before installing GigaFlow.
The following table provides guidance for data migration times on Windows. Your times may vary depending on numerous factors.
Data Size | Data Migration Time
---|---
~6 TB | 10 min
~12 TB | ~2 h
The following table provides guidance for data migration times on Linux. Your times may vary depending on numerous factors.
Data Size | Data Migration Time
---|---
~5 TB | 3 min
Reducing the size of the GigaFlow database prior to installation
To reduce the size of the GigaFlow database, perform the following steps:
To validate that the storage size is smaller and within acceptable limits, repeat the steps in the Estimating the database size section.
Note: You can restore the number of storage days to their previous configuration after the installation of GigaFlow 18.22.2.0 or newer is complete.
First go to https://update.viavisolutions.com/latest/, download the file ObserverGigaFlowSetupx64.exe and run it.
During the installation of GigaFlow 18.22.2.0 or newer, you will be prompted with different dialogs depending on what version you are upgrading from and different possible scenarios.
If multiple PostgreSQL versions are detected, then GigaFlow will also prompt you with the following screen:
With respect to the scenarios presented in section Upgrading from GigaFlow version 18.19.0.0 release and newer, select the options as follows:
Windows data migration
When PostgreSQL 11 option is selected for scenario 3, or you are upgrading from a version prior to GigaFlow 18.19.0.0, the following dialog regarding Data Migration is shown:
If you are concerned about the duration of the data migration process, do not click the Next button. Instead, click the Cancel button to exit the setup and follow the steps in the Estimating the database size section. Otherwise, click the Next button to continue the installation process.
Data migration progress is displayed in a separate command window, with a new log entry every 30 seconds until the data migration process completes. The status window closes automatically at that time.
Logging
Data migration is logged in the file
This procedure is valid for both Oracle Linux 8 and RedHat Enterprise Linux 8.
First go to https://update.viavisolutions.com/latest/ and download the tar file ObserverGigaFlowUnixx64_rpm.tgz.
To untar the file, run the following command:
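The extraction itself is a standard tar invocation. The sketch below uses a stand-in archive so that it is self-contained; for the real download the equivalent command would be `tar -xzf ObserverGigaFlowUnixx64_rpm.tgz`:

```shell
# Demo with a stand-in archive. Flags: -x extract, -z gunzip, -f archive file.
mkdir -p demo_src
echo "#!/bin/sh" > demo_src/ObserverGigaFlow_Install.sh
tar -czf demo_bundle.tgz demo_src

tar -tzf demo_bundle.tgz        # list the contents without extracting
rm -rf demo_src
tar -xzf demo_bundle.tgz        # extract, recreating demo_src/
ls demo_src/
```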
The tar file contains the following files:
Clean Install
To install GigaFlow on a new Linux box, run the script using the following command:
This will install the dependencies and the GigaFlow software, and will start the new PostgreSQL version 16.1 service GFPostgresSvc on port 26906.
At the end of the installation output to the terminal, ignore the following error message:
ERROR: extension "plpgsql" already exists
The default data folder for the managed PostgreSQL on Linux is /opt/ros/pgsql/data. If the default data location needs to be changed, run the following command:
<new_data_folder_path>
The GFPostgresSvc service will start using the new data folder. The default data location will be linked to the new folder, so no other changes are needed.
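The linking described above can be sketched as follows. This illustrates the symlink mechanism only, with throwaway local paths so it is safe to run; use the GigaFlow-provided command on a real installation:

```shell
# Illustration of the symlink mechanism only, using throwaway paths.
OLD_DATA=./opt_ros_pgsql_data        # stands in for /opt/ros/pgsql/data
NEW_DATA=./new_data_folder           # stands in for <new_data_folder_path>

mkdir -p "$NEW_DATA"
echo "postgres data" > "$NEW_DATA/PG_VERSION"

# Replace the default location with a symlink to the new folder, so the
# service keeps using the default path while the data lives elsewhere.
rm -rf "$OLD_DATA"
ln -s "$NEW_DATA" "$OLD_DATA"

readlink "$OLD_DATA"                 # shows the link target
cat "$OLD_DATA/PG_VERSION"           # data is reachable via the default path
```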
Upgrade
Before upgrading GigaFlow, make sure that the postgresql service is running.
For upgrades from GigaFlow versions 18.21.0.0 and prior, run the following command:
This will upgrade GigaFlow to the latest version 18.22.2.0. A managed PostgreSQL service GFPostgresSvc is started that will run on port 26906.
If the previous version of PostgreSQL is earlier than 16, the old data is migrated to the new version. Refer to the Data Migration section for data migration times and steps to reduce the time. During the migration, the file /tmp/gigaflow/dm.log displays the progress message Data Migration in Progress.
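You can check for that progress message from another terminal. The sketch below creates a local sample file so it is self-contained; on a real system, point it at /tmp/gigaflow/dm.log:

```shell
# Illustrative: check the migration log for the progress message described
# above. A sample log file is created here so the sketch is self-contained.
LOG=./dm.log
echo "Data Migration in Progress" > "$LOG"

if grep -q "Data Migration in Progress" "$LOG"; then
  echo "migration still running"
fi

# To follow it live on a real system:
#   tail -f /tmp/gigaflow/dm.log
```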
Note: If the previous version of PostgreSQL is already on version 16, then the new GFPostgresSvc service will use the same data folder, with the default data folder /opt/ros/pgsql/data linked to it.
Logging
For the logging information of the installation process, refer to the file /tmp/gigaflow/gigaflow.log.
If GigaFlow is installed and connected to a PostgreSQL running remotely, then no data migration is performed by the installation process. You need to do the data migration manually.
The installation process will update GigaFlow and start the managed PostgreSQL locally on port 26906.
Note: GigaFlow version 18.22.2.0 will be the last release that supports remote PostgreSQL.
The same message as above is displayed on the Linux terminal when the ObserverGigaFlow_Install.sh script is run with the upgrade parameter.
If applications other than GigaFlow have databases in the same PostgreSQL service that is used by GigaFlow, and a data migration is performed, then only the GigaFlow database will be migrated causing the other databases to be deleted. If this is detected, then the following warning shows.
The same message as above is displayed on the Linux terminal when the ObserverGigaFlow_Install.sh script is run with the upgrade parameter. You will have the choice to continue the installation process or to exit.
The Windows installer for GigaFlow 18.22.2.0, which contains the GigaFlow-managed PostgreSQL 16.1 binaries, will perform the data migration if the PostgreSQL version on the system being upgraded is earlier than 16.1. This differs from the process for version 18.21.0.0 and prior, where you needed to manually install PostgreSQL 16.1 from a PostgreSQL Windows installer in the dist folder.
PostgreSQL service status
Make sure that the PostgreSQL service is running before you start the installation, since the scripts use the running service to query the data folder and the version.
To do this in Windows, open the services panel and check the state of the service. If the PostgreSQL service is not running or cannot start, then you need to fix this before you begin the installation process.
You can use the Windows Event Log to determine the root cause. The log folder in the PostgreSQL data folder has log files for each day which might contain useful information.
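On a Unix-like host, a quick way to pull up the newest of those daily log files is sketched below. It is demonstrated against a locally created directory; on a real system, point it at the log folder inside your PostgreSQL data folder (the exact path is an assumption):

```shell
# Illustrative helper: print the newest file in a PostgreSQL log directory.
latest_log() {
  ls -t "$1" | head -n 1
}

# Demo directory with two files of different ages.
mkdir -p sample_log
echo "older entry" > sample_log/postgresql-Mon.log
sleep 1
echo "newer entry" > sample_log/postgresql-Tue.log

latest_log sample_log    # prints: postgresql-Tue.log
```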
Selecting the correct PostgreSQL version during installation
If the installer detects multiple versions of PostgreSQL in the HKLM\System\CurrentControlSet\Services registry entry, then it will prompt the user to select the one used for GigaFlow.
If the wrong version is selected, then the installer fails with an error message, since it will try to initialize PostgreSQL data but the folder is not empty. In this case, abort the installation and retry.
Data migration on Windows
The GigaFlow installer will perform the data migration as part of the installation process. If the data migration fails, then refer to the <GigaFlow_InstallDir>\pgdm.log file for the last successful step.
Check if any of the following cases apply to your failure scenario:
HKLM SYSTEM\CurrentControlSet\Services\postgresql-x64-11 "ImagePath"
HKLM SYSTEM\CurrentControlSet\Services\postgresql-x64-16 "ImagePath"
Once the issue is resolved you must run the data migration from the Command Prompt in Administrator mode as follows:
After the data migration process is successfully completed, run the following commands:
If the installation or the data migration process fails, then check the logs in the /tmp/gigaflow folder. The gigaflow.log file contains the installation logs.
If the data migration fails, then refer to the log files from the pg_upgrade_output.d folder present in the /opt/ros/pgsql/data folder.
The /tmp/gigaflow/ folder will have a file named pg_upgrade_check.sh which has the bin and data folder paths of the old and new PostgreSQL. Make sure that the paths are correct.
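Checking those paths can be scripted. The sketch below validates that each configured directory actually exists; the directory names here are placeholders created for the demo, so substitute the old and new PostgreSQL bin and data folders from your pg_upgrade_check.sh:

```shell
# Illustrative: verify that bin/data paths (like those recorded in
# pg_upgrade_check.sh) point at real directories before retrying migration.
check_dir() {
  if [ -d "$1" ]; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

# Placeholder paths created for the demo.
mkdir -p demo_pg/bin demo_pg/data
check_dir demo_pg/bin
check_dir demo_pg/data
check_dir demo_pg/does_not_exist
```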
After the issue is fixed, run the following commands to restart the data migration process:
To find out more about a particular IP address:
Click the Events tab in the panel on the left side of the screen.
The infographics and tables display all the security events associated with that IP address during the reporting period.
See Search and Events in the Reference Manual for more.
See Reports > System Wide Reports > Device Connections and Reports > System Wide Reports > MAC Address Vendors in the Reference Manual to get a list of MAC addresses associated with your network.
See Search in the Reference Manual for more.
Search for a MAC address to see what device it is connected to and the VLAN it is in:
For example, searching for the MAC address c0:4a:00:2c:d4:06 on our demo network:
Expanding the MAC panel displays information:
Selecting ARPs from the links in Tools brings up the relevant ARP entries. From here, you can click an interface to access more information about the physical interface the device is connected to. The interface with the lowest MAC count is the connected interface.
Blue boxed text indicates any piece of information that allows you to conduct a follow-on search:
The Treeview, or Graphical Flow Map, will automatically load. This is a visual representation of the in- and out- traffic associated with a selected device.
To explore the Graphical Flow Map:
To search for a specific network switch:
To make a follow-on search from a specific device or switch:
Using GigaFlow Search, enter a known IP address associated with the switch, or the switch name, and search:
Select a site, device or remote branch directly from the Dashboard > Summary Device and Dashboard > Interface Summary pages. Use GigaFlow Search when the site, device or remote branch is not listed.
After searching by IP address, click on the Device Name in the panel on the left. The information in the panel on the right-hand side of the screen will refresh. Click Overview in the Tools section of this panel.
Figure: Sample results from GigaFlow's search for an internal IP address
In the Summary Interfaces panel, click on any interface to get overview information.
If you are interested in inbound - or download - traffic, select the Drill Down icon beside the interface name from the Summary Interfaces In panel.
A list of the application traffic, ranked by volume, will be displayed here; this Application Flows report will include details of the internal IP address communicating with this application.
You can view historical flow data by selecting any date or time.
Select a site, device or remote branch directly from the Dashboard > Summary Device and Dashboard > Interface Summary pages. Search using the main GigaFlow Search when the site, device or remote branch is not listed.
After searching by IP address, click on the Device Name in the panel on the left. The information in the panel on the right-hand side of the screen will refresh. Click Overview in the Tools section of this panel.
Figure: Sample results from GigaFlow's search for an internal IP address
In the Summary Interfaces panel, click on any interface to get an overview.
If you are interested in inbound - or download - traffic, select the Drill Down icon beside the interface name from the Summary Interfaces In panel.
A list of the application traffic, ranked by volume, will be displayed here; this Application Flows report will include details of the internal IP address communicating with this application.
You can view historical flow data by selecting any date or time.
From the Report drop-down menu at the top of the screen, select Application Flows with Users and click Submit.
To see all traffic for a particular user, click the Drill Down icon to the left of that user. Click the Submit icon once more.
First Packet Response (FPR) is a useful diagnostic tool, allowing you to compare the difference between the first packet time-stamp of a request flow and the first packet time-stamp of the corresponding response flow from a server. By comparing the FPR of a transaction with historical data, you can troubleshoot unusual application performance.
Application performance can be affected by many types of transaction: credit card transactions, DNS transactions, loyalty club reward transactions or any type of hosted transaction.
Ensure that First Packet Response (FPR) is enabled for each application. See Configuration > Server Subnets in the Reference Manual.
Select a site, a device or a remote branch directly from the Dashboard > Summary Device and Dashboard > Interface Summary pages. Access the Forensics report page by clicking the Drill Down icon in the Summary Devices table.
Search using the main GigaFlow Search when the site, device or remote branch is not listed. After searching by IP address, click on the Device Name in the panel on the left. The information in the panel on the right-hand side of the screen will refresh. Click Forensics in the Tools section of this panel.
Figure: Sample results from GigaFlow's search for an internal IP address
The panel on the right is displayed when the IP address is selected in the left-hand side panel.
Whether you arrive via the Dashboard or using GigaFlow Search, the Forensics report page will open with an Application Flows report.
Select Applications from the drop-down Report menu. Click the Submit icon to refresh the Forensics report page.
For demonstration purposes, we are looking for credit card-related application data not displayed in the most-used applications graph. To see any application not listed in the top applications, change the view from graph to table.
Select 100 entries.
After identifying the application associated with credit card transactions, click the Forensics Drill Down icon to the left of the application and choose Selected from the hover option list. Click the Submit icon again.
Select the All Fields Max FPR (First Packet Response) report type from the drop-down menu. Change the view from table to graph and click Submit.
GigaFlow will return the response times for credit card applications in the reporting period. By clicking on the graph or selecting from the calendar, you can view response times over the period. The application response times, along the y-axis, are given in milliseconds.
Click Events in the main menu. GigaFlow will display information about the network events during the reporting period, including:
Figure: Events Graph
Figure: Event Categories infographic
Figure: Confidence & Severity infographic
By clicking once on any legend item, you will be taken to a detailed report, e.g. detailed reports for each Event Type, Source Host, Infrastructure Device, Event Category and Target Host.
Identify any internal IP address from your network that appears on the Event Target Host(s) list. If an internal IP address appears on the list, the associated device is infected and communicating with a known threat on the internet. Click on the IP address to find out who it is communicating with.
See System > Event Scripts and the GigaFlow Wiki for more.
Sites are subnet and IP range aliases. Using sites can help you to identify problems within logical groupings of IP addresses, e.g. by site or subnet. Once a site is defined, GigaFlow will begin to record information about it.
To define a new site, click Configuration > Sites. Existing Sites are displayed in the main table. To add a new Site:
To bulk-add new sites, i.e. more than one site at a time:
name,description,startip,endip
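Following the header format above, a bulk-import file might look like this. The site names and IP ranges below are made-up examples:

```shell
# Illustrative: build a bulk-import CSV in the name,description,startip,endip
# format. The site names and IP ranges are made-up examples.
cat > sites.csv <<'EOF'
name,description,startip,endip
HQ-LAN,Head office user subnet,10.1.0.1,10.1.0.254
Branch-West,Western branch office,10.2.0.1,10.2.0.254
Guest-WiFi,Visitor wireless range,192.168.50.1,192.168.50.254
EOF

head -n 1 sites.csv    # prints: name,description,startip,endip
```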
To monitor defined sites, go to Profiling > Sites.
Sometimes, it can be useful to define a site by subnet and by infrastructure device. For example, a corporate network could have a subnet served by more than one router. To create a granular view of flow through each router, a separate site could be created for each router, defined by the IP range of the subnet as well as the device name. Type in the filter box to quickly identify a device.
To edit a site definition, click Configuration > Sites and click on the site name or on the adjacent Drill Down icon . This will bring up a new page for that site where you can edit the group definition.
Profiling is a very powerful feature. A flow object is a logical set of defined flows and IP source and destination addresses. Flow objects make it possible to build up a complex profile, or profiler, quickly (these terms are used interchangeably). A Profiler describes a particular network behaviour involving network flow objects. Profiling allows you to configure both the profiles of the flows to be tested (the entry flows) and the profiles of the allowed flows, i.e. the templates against which the entry flows are tested.
To get going, create your first profile. See Configuration > Profiling in the GigaFlow Reference Manual.
Step one is then to create a new flow object. To create a flow object:
You can AND existing profile(s) with the new flow object, i.e. the new definitions are added to the existing profile(s).
You can also select alternative existing profile(s) that this new profile maps to (OR), i.e. the new flow object matches either the new definitions or the existing flow object definitions.
To AND or OR profile(s), use the drop-down menu in the flow object definitions and click the +.
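Conceptually, ANDed and ORed profiles combine as boolean conditions over a flow. The sketch below is illustrative only; the function and field names are invented for the example and are not GigaFlow APIs:

```javascript
// Illustrative only: profiles modelled as predicates over a flow record.
// ANDed profiles must all match alongside the new definitions;
// ORed profiles are alternatives that can match on their own.
function matchesProfile(flow, newDef, andProfiles, orProfiles) {
  const andMatch = newDef(flow) && andProfiles.every((p) => p(flow));
  const orMatch = orProfiles.some((p) => p(flow));
  return andMatch || orMatch;
}

// Hypothetical example: printer traffic on an office subnet, OR a legacy
// printer profile kept as an alternative.
const isPrinterPort = (f) => f.dstPort === 9100;
const onOfficeSubnet = (f) => f.srcIp.startsWith("10.1.");
const legacyPrinter = (f) => f.dstIp === "10.2.0.50";

const flow = { srcIp: "10.1.4.7", dstIp: "10.1.0.9", dstPort: 9100 };
console.log(matchesProfile(flow, isPrinterPort, [onOfficeSubnet], [legacyPrinter])); // true
```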
As an example, a printer installation at a particular location connected to a particular router can be defined by a flow object that consists of:
Or
A second flow object can be defined that will have its flow checked against the Allowed profile; this is an Entry profile.
To create a new Profiler (this term is used interchangeably with profile):
Select the Apps/Options tab.
GigaFlow comes loaded with a standard set of application port and protocol definitions. Flow records are associated with application names if there is a match.
Users can define their own application names within the software and have that application ID (Appid) available within the flow record. There are 3 techniques used, applied in order:
Select the Apps/Options tab.
To create a new Defined Application:
The Realtime Profiling Status at Profiling > Realtime Overview shows realtime data on defined Profilers. The display updates every 5 seconds and shows:
The Profiling Event Dashboard at Profiling > Profiling Events gives an overview of profile events.
At the top of the page you can set both the reporting period and resolution, from one minute to 4 weeks.
The Profiles infographic shows a timeline of the number of events with the profile(s) involved. Circle diameters represent the number of events. The peak number of events in the timeline for each profile are highlighted in red.
The Severities infographic shows a timeline of the number of events along with their estimated severity level(s). Circle diameters represent the number of events. The peak number of events in the timeline for each severity level are highlighted in red.
The Event Entries table gives a breakdown of profile events with the following entries:
A drop-down selector lets you choose the maximum number of events to display.
To access a detailed overview of any flow, click on the adjacent Drill Down icon. This provides a complete overview of that flow, listing:
In GigaFlow, Appid is a positive or negative integer value. The way in which the Appid is generated depends on which of the 3 ways the application is defined within the system. Following the hierarchy outlined in Configuration > Profiling -- Apps/Options -- Defined Applications, a negative unique integer value is assigned if (1) the application is associated with a Profile Object or (2) if it is named in the system. If the application is given by its port number only (3), a unique positive integer value is generated that is a function of the lowest port number and the IP protocol.
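For the port-only case (3), the exact encoding is internal to GigaFlow; the sketch below only illustrates the idea of deriving a stable positive integer from the lowest port number and the IP protocol. The formula itself is an assumption for illustration, not the documented encoding:

```javascript
// Assumed encoding, for illustration only: combine the lowest port of the
// flow with the IP protocol number to get a stable, positive integer Appid.
function portAppid(srcPort, dstPort, ipProtocol) {
  const lowestPort = Math.min(srcPort, dstPort);
  return lowestPort * 256 + ipProtocol;
}

// A client talking to HTTPS (TCP, protocol 6): the server port 443 is lowest.
console.log(portAppid(51515, 443, 6)); // 113414
```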
See Glossary for more about flow record fields used by GigaFlow.
See also Search for instructions to access the Graphical Flow Mapping feature.
By clicking any Category, Severity, Source IP or Target IP in the Event Entries table, you will be taken to a version of the Events Dashboard filtered for that item.
See Dashboards > Events for an overview of the structure of this page.
An integration allows you to call defined external web pages or scripts directly from the Device Interface Overview page. To add an integration, navigate to System > Global. In the Integrations settings box, you can add or change an integration.
There are two types of integration:
To add an integration:
Example 1
In the code below, the populated field tells the software that you want to populate the device field with the IP address and the ifindex field with the ifindex. These fields (device and ifindex) will be passed to the target. We also have required fields, which the user must populate.
{
  'populated': {
    'device': 'flow_device',
    'ifindex': 'flow_ifindex'
  },
  'required': [
    { 'name': 'user', 'display': '', 'type': 'text', 'value': '' },
    { 'name': 'password', 'display': 'Password', 'type': 'password' },
    { 'name': 'macro', 'display': 'What Macro?', 'type': 'select', 'data': ['macro1', 'macro2', 'macro3', 'macro4'] }
  ]
}
Example 2:
var ProcessBuilder = Java.type('java.lang.ProcessBuilder');
var BufferedReader = Java.type('java.io.BufferedReader');
var InputStreamReader = Java.type('java.io.InputStreamReader');
output.append(data);
output.append("Device IP:" + data.get("device") + " ");
output.append("IFIndex:" + data.get("ifindex") + " ");
try {
    // Use a ProcessBuilder
    //var pb = new ProcessBuilder("ls", "-lrt", "/"); // linux
    var pb = new ProcessBuilder("cmd.exe", "/C", "dir"); // windows
    output.append("Command Run");
    var p = pb.start();
    var is = p.getInputStream();
    var br = new BufferedReader(new InputStreamReader(is));
    var line = null;
    while ((line = br.readLine()) != null) {
        output.append(line + "");
    }
    var r = p.waitFor(); // Let the process finish.
    if (r == 0) { // No error
        // run cmd2.
    }
    output.append("All Done");
} catch (e) {
    output.append(e);
}
log.warn("end");
See System > Global in the Reference Manual for more.
See System > Alerting in the Reference Manual to enable email alerts.
You will receive a GigaFlow alert email when a device in your network makes a connection to a blacklisted IP address. The embedded link will direct you to an Events page that summarises interactions with the blacklisted IP address during the reporting period.
See Determine if Bad Traffic is Affecting Your Network for more.
If you receive intelligence about a specific IP Address, MAC Address, network device or user, carry out a GigaFlow Search on the object. For this example, we will search by IP address.
After searching by IP address, click By Either in the panel on the left.
This will bring you to an Events page that summarises interactions with the IP address during the reporting period. Using this information, you can build a picture of the importance of the event.
See Determine if Bad Traffic is Affecting Your Network for more.
To view historical information, select the relevant dates and times at the top of the page.
See Reports > System Wide Reports > SYN Forensics Monitoring in the Reference Manual.
GigaFlow monitors all TCP flows where only the SYN bit is set. In normal network operations, this indicates that a flow has not seen a reply packet while active in a router's Netflow cache.
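A "lonely SYN" check can be pictured as a filter over flow records. The field names below are assumptions for the sake of the sketch, not GigaFlow's internal schema:

```javascript
// Illustrative sketch: a TCP flow (IP protocol 6) whose cumulative TCP flag
// byte has only the SYN bit (0x02) set saw no reply while in the flow cache.
const SYN = 0x02;
function isLonelySyn(flow) {
  return flow.ipProtocol === 6 && flow.tcpFlags === SYN;
}

console.log(isLonelySyn({ ipProtocol: 6, tcpFlags: 0x02 })); // true: SYN only
console.log(isLonelySyn({ ipProtocol: 6, tcpFlags: 0x12 })); // false: SYN+ACK seen
```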
A lonely SYN can be an indicator that:
To view objects that are behaving anomalously, navigate to Reports > System Wide Reports > SYN Forensics Monitoring.
You will see a summary of all the internal sources listed in order of the number of destination objects associated with each internal source.
Click the Drill Down icon for more information about each IP address.
The Trace Extraction functionality allows you to pull a packet capture out of a specified GigaStor as a .pcap file, which you can then download to your machine.
To create a trace extraction, open the Forensics page corresponding to the device that you want. There you will find the Trace Extraction icon, which provides access to the trace extraction parameters.
Apply filters to the selected device before creating the packet capture. This narrows the scope (and disk space required) of the packet capture. For more information, see the filtering syntax documentation.
Each trace extraction request is considered a job, and a packet capture is the result of each successful job. Every trace extraction job has its own progress, including a start and end. You can submit multiple, concurrent trace extraction jobs, which are queued in the order they are received. The next trace extraction job in the queue begins after the current job completes, either successfully or by failure. To maintain optimal disk performance on the GigaStor system, only one job is active at any given time on each GigaStor.
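The queueing behaviour described above amounts to a simple first-in, first-out queue with a single active job, where a failed job does not block the next one. A minimal sketch (all names are illustrative, not GigaFlow APIs):

```javascript
// Minimal sketch of the trace extraction queue behaviour: jobs run one at a
// time, in arrival order, whether they succeed or fail.
class TraceExtractionQueue {
  constructor() {
    this.pending = [];   // jobs waiting, in arrival order
    this.completed = []; // { name, ok } results
  }
  submit(name, run) {
    this.pending.push({ name, run });
  }
  // Run queued jobs one at a time; a failure does not block the next job.
  drain() {
    while (this.pending.length > 0) {
      const job = this.pending.shift();
      let ok = true;
      try {
        job.run();
      } catch (e) {
        ok = false;
      }
      this.completed.push({ name: job.name, ok });
    }
  }
}

const q = new TraceExtractionQueue();
q.submit("capture-a", () => {});
q.submit("capture-b", () => { throw new Error("GigaStor busy"); });
q.submit("capture-c", () => {});
q.drain();
console.log(q.completed.map((j) => `${j.name}:${j.ok ? "ok" : "failed"}`).join(" "));
// capture-a:ok capture-b:failed capture-c:ok
```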
The list of created packet capture files is available from Reports > Trace Extraction Jobs.
You can configure how Trace Extraction is used in GigaFlow from System > Global > GigaStor.
To create a Trace Extraction:
To view your trace extraction jobs, check their status and download the generated packet capture files, go to Reports > Trace Extraction Jobs.
See also System > GigaFlow Cluster.
A single GigaFlow server can be configured to search for IP addresses across many remote GigaFlow servers directly from Viavi's Apex system. This feature is useful for large organisations that may have many GigaFlow servers monitoring different networks within the organisation, e.g. in different regions. The central administrator may want a view across the entire network, e.g. to determine if a particular suspect IP address has been recorded by routers on different networks.
In this example, assume that you, as the main administrator, want visibility on several remote GigaFlow servers.
The set-up is:
Figure: Defining a GigaFlow cluster
Log in to GigaFlow Server #0, the Pitcher, and navigate to System > GigaFlow Cluster.
In the This Server panel, you will see a pre-generated unique secret. Leave this as is.
In another browser tab or window, log into Receiver 1 (GigaFlow Server #1). Copy the unique secret from Receiver 1's This Server panel. You do not need to do anything with the New Cluster Server panel on the receivers.
Figure: This Server panel
Switch back to the Pitcher (GigaFlow Server #0). In the New Cluster Server panel, perform the following steps:
NOTE: This IP address is used by the Pitcher to generate a secure hashed key for communication. The receiver reverses this hash using, among other things, the IP address of the Pitcher. An intermediate firewall (NAT) could create problems if the Pitcher does not create the hashed key using the IP address seen by the receiver.
NOTE: This user must exist on Receiver 1. If it does not exist, then switch to Receiver 1 and create a new normal user on Receiver 1 (for example, reportuser).
RESULT: The Pitcher connects to Receiver 1 using the Admin user to verify that everything is correct and to populate the table in the Cluster Access frame, on the System - GigaFlow Cluster page. Receiver 1 shows in the main table.
Figure: New Cluster Server panel
NOTE: The cluster server feature is flexible: a receiver in one cluster can be a pitcher for another.
Figure: Conducting a GigaFlow Cluster search
Following the search link from Apex, you will be brought to a new tab and the login screen for the Pitcher machine. After logging in, you will be brought to the GigaFlow Cluster report page.
Figure: The initial view of the GigaFlow Cluster report page
This displays a list of hits for this IP address across the cluster; in this example, the IP address 172.21.21.21 was searched across 11 devices monitored by three receivers. On these receivers the system found 9 devices with data matching the search and there were no errors.
In the first table, each GigaFlow server is listed with:
Figure: Clicking on the drill down icon beside a result brings up the full user interface and a forensics report for that device on the associated GigaFlow server
The system allows ten minutes between running the report and viewing these results without re-authentication.
You can also select different report types to run on that device on that GigaFlow server by selecting from the drop-down menu. See Reports > Forensics in the main Reference Manual for more.
Communication between all clients in a GigaFlow cluster is IP to IP, i.e. unicast. The traffic is carried over HTTPS, using certificate-based TLS.
We test against the latest Mozilla Firefox, Microsoft Edge and Google Chrome browsers.
See here for a useful server size calculator.
GigaFlow has been tested and certified to support up to 1,000 concurrent devices or up to 40,000 flows per second (flow/s) from less than 20 devices.
The flow rate changes with the number of connected devices as follows:
Flow/s | Number of Devices |
---|---|
50,000 | 10 |
40,000 | 20 |
20,000 | 40 |
10,000 | 80 |
5,000 | 160 |
2,500 | 300 |
1,250 | 600 |
1,000 | 1,000 |
Allow for at least 600 bytes per flow record per second for I/O throughput, i.e.
Flow/s | Bytes/s (Sustained Write Performance) | MB/s (Sustained Write Performance) |
---|---|---|
100 | 60,000 | 0.06 |
2,000 | 1,200,000 | 1.2 |
10,000 | 6,000,000 | 6.0 |
40,000 | 24,000,000 | 24.0 |
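The table above follows directly from the 600-bytes-per-record rule; as a quick sketch:

```javascript
// I/O throughput rule above: allow at least 600 bytes of sustained write
// performance per flow record per second.
function sustainedWriteBytesPerSec(flowsPerSecond) {
  return flowsPerSecond * 600;
}

console.log(sustainedWriteBytesPerSec(100));   // 60000
console.log(sustainedWriteBytesPerSec(40000)); // 24000000 (24 MB/s)
```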
With
f = flow/s
d = number of devices
I = required input/output operations per second (IOPs), nominally sustained sequential writes.
I = 20 + (f / 500) + (d / 5)
i.e. allow for a base of 20 IOPs, plus 1 IOP for every 500 flow/s and another 1 IOP for every 5 devices.
Flow/s | Number of Devices | IOPs |
---|---|---|
1,000 | 1,000 | 222 |
5,000 | 1,000 | 230 |
10,000 | 100 | 60 |
40,000 | 10 | 102 |
Allow for at least 100 IOPs read.
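The IOPs figures in the table can be reproduced from the formula above:

```javascript
// IOPs sizing formula from above: base of 20 IOPs, plus 1 per 500 flow/s,
// plus 1 per 5 devices (write side; allow at least 100 IOPs for reads).
function requiredWriteIops(flowsPerSecond, deviceCount) {
  return 20 + flowsPerSecond / 500 + deviceCount / 5;
}

console.log(requiredWriteIops(1000, 1000)); // 222, matching the table
console.log(requiredWriteIops(40000, 10));  // 102
```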
The server must support at least 300 MB/s sustained read and write to handle the peak device or flow count. Anything less than this will result in dropped flows. For Linux, we recommend EXT4 or XFS file systems as well a dedicated RAID partition for the database. Adding a hardware RAID controller that supports RAID 10, or at least RAID 5, will improve performance and provide hardware redundancy. The amount of storage required is directly related to the flow rate and features enabled.
Data Type | Minimum Space Per Record (Bytes) |
---|---|
Forensics Flow | 250 |
Event Record | 900 |
500 flow/s of forensics == 450 MB per hour == 11 GB disk space per day.
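That worked example can be checked directly from the per-record figure in the table:

```javascript
// Storage arithmetic from above: 250 bytes per forensics flow record,
// at 500 flow/s, accumulated per hour and per day.
const bytesPerRecord = 250;
const flowsPerSecond = 500;
const mbPerHour = (bytesPerRecord * flowsPerSecond * 3600) / 1e6;
const gbPerDay = (mbPerHour * 24) / 1000;

console.log(mbPerHour); // 450
console.log(gbPerDay);  // 10.8 (roughly 11 GB per day)
```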
A basic installation should have 4 GB RAM available for the OS and additional 50 MB per device to monitor. More RAM will always improve performance:
Number of Devices | Minimum RAM (GB) |
---|---|
10 | 4.5 |
100 | 9 |
500 | 29 |
1,000 | 54 |
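The RAM figures above follow from the stated rule (4 GB base plus 50 MB per monitored device):

```javascript
// RAM sizing rule from above: 4 GB (4000 MB) for the OS plus 50 MB per
// monitored device, expressed in GB.
function minRamGb(deviceCount) {
  return (4000 + 50 * deviceCount) / 1000;
}

console.log(minRamGb(10));   // 4.5
console.log(minRamGb(1000)); // 54
```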
CPU sizing in GigaFlow is based on the PostgreSQL database. Overall performance is also dependent on CPU performance.
While there is little to gain by going beyond 8 cores, more powerful CPUs will provide a better experience. An Intel Xeon X5680 3 GHz or Core i7-3770S 3 GHz is recommended as the minimum specification.
As a demonstration, GigaFlow was installed on a typical server with the following specifications:
The results of a performance test were as follows:
Devices | Flows per Device (per s) | Total Flow/s | CPU Idle (%) | Disk Write (MB/s) | IO Writes (per s) | Disk Utilisation (%) | Notes |
---|---|---|---|---|---|---|---|
10 | 15,000 | 150,000 | 80 | 88 | 250 | 9 | |
50 | 3,000 | 150,000 | 78 | 91 | 260 | 11 | |
100 | 1,500 | 150,000 | 78 | 91 | 261 | 11 | At limit of flow cache before flows are dropped. |
250 | 400 | 100,000 | 85 | 62 | 142 | 5 | Must double RAM used by GigaFlow to 1,536 MB. |
500 | 200 | 100,000 | 86 | 58 | 190 | 10 | Must double RAM used by GigaFlow to 1,536 MB. |
1,000 | 100 | 100,000 | 85 | 59 | 232 | 10 | Must double RAM used by GigaFlow to 1,536 MB. |
2,000 | 50 | 100,000 | 82 | 59 | 220 | 10 | Must double RAM used by GigaFlow to 1,536 MB. |
These results show that this relatively mid-specification machine can cope with 50 devices at 150K flows per second. The same system can handle 2,000 devices with a cumulative count of 100K flows/s.
We recommend a maximum of 1,000 devices per GigaFlow server. Above this, database query performance will degrade.
Yes, there is a REST endpoint for all report data, with portal user definitions to control access. You can open your GigaFlow system for integration with third party applications.
For more information, see API articles at the official GigaFlow Wiki.
No, your GigaFlow system is accessed via an HTML/JavaScript front-end using your preferred browser. Output is rendered as HTML, .csv or .pdf.
Figure: GigaFlow's login screen
Log in to GigaFlow from any browser using your credentials. GigaFlow will open on the Dashboards welcome screen.
The left menu contains links to the main outputs from GigaFlow.
Figure: Detail of the left main menu
The top menu contains links to system configuration, system settings and user settings.
Figure: Detail of the top main menu
Most items in GigaFlow will have a white information button on a gray background at the top right. Click this to link out to the relevant section in the GigaFlow Reference Manual.
You can refresh any page in GigaFlow using the browser refresh button.
By clicking, holding and dragging left or right, you can create a report for any shorter time period.
Figure: Click and drag to create a report for a particular timeframe within a timeline
The user selects a shorter time period of interest by clicking and dragging, in this case from just before 11:00 to just before 11:15.
Figure: Re-scaled timeline
The timeline is re-drawn for the selected time period. All associated information on this page is recalculated, i.e. tables and graphs.
Throughout GigaFlow, all hyperlinks are colored blue.
The system search is at the top of every GigaFlow page; it is a powerful and convenient way to access information directly, returning relevant matches and detailed summary information.
The left hand side of the search results screen displays summary information about that IP address, including:
The Treeview, a kind of graphical flow mapping, will automatically load on the right hand side of the search results screen. This is a visual representation of the in- and out- traffic associated with a selected device.
See Search > Graphical Flow Mapping for more.
Figure: GigaFlow's search bar and results screen. In this screenshot, the user is searching for an IP address, 172.21.21.254.
Scrolling down reveals additional results:
The tabbed box on the left displays search results for that IP address, including any infrastructure device that it is associated with, the number of interfaces it was recently seen on, IP entry details, ARP entries and the number of secflow events associated with it.
Each item can be clicked to display more information and follow-on searches can be carried out for linked information, e.g. for associated MAC addresses.
GigaFlow can search by MAC address. This returns the name of the connected device and its VLAN.
To search by MAC address:
After searching the MAC address a number of key pieces of information will be displayed, including:
From here, some of the other actions you can take include:
To search by username:
To search for a specific network switch:
To make a follow-on search from a specific device or switch:
GigaFlow Search provides access to the Graphical Flow Mapping feature. Searching for any IP address returns summary tables as well as a visualisation of flows during the reporting period selected.
To access this feature, search for an IP address. See Searching by IP Address above.
On the right hand side of the screen, you will see the Graphical Flow Map.
Figure: GigaFlow's Graphical Flow Map
The Graphical Flow Map is an interactive visualisation of the flows associated with that IP address. The branches of the Graphical Flow Map can be expanded to show associated interfaces and destination and source devices. These in turn can be explored.
To explore the Graphical Flow Map:
See also Profiling > Events > Flow Details
After logging in, you will see the Server Overview Dashboard. From here you have direct access to some of GigaFlow's main functions.
Figure: GigaFlow's main dashboard
Located at Dashboards > Enterprise Dashboard.
This is a summary of traffic across the busiest infrastructure devices over the past hour. Infrastructure devices are routers sending flow as well as other Layer-2 devices that are not sending flow but sending ARP and CAM Tables. The information displayed includes:
When more than ten devices are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This is a summary of traffic across the busiest interfaces over the past hour. The information displayed includes:
When more than ten interfaces are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
See Configuration > Infrastructure Devices for setup instructions.
This graph shows the traffic associated with the busiest sites over the past hour. The information displayed includes:
When more than ten sites are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the traffic associated with the busiest applications over the past hour. The information displayed includes:
When more than ten applications are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the traffic associated with the busiest source IPs in the past hour. The information displayed includes:
When more than 10 IP addresses are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the traffic associated with the busiest destination IPs in the past hour. The information displayed includes:
When more than 10 IP addresses are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
Located at Dashboards > Server Overview.
Figure: Detail of the left main menu and Dashboards submenu.
This default welcome screen shows summary server flow information with three main sections:
This is a summary of traffic across all of your infrastructure devices ranked by the busiest network device. Infrastructure devices are routers sending flow as well as other Layer-2 devices that are not sending flow but sending ARP and CAM Tables. The information displayed includes:
The bars on the graph indicate the volume of data seen over the reporting period, by default 2 hours.
This is a graph and table of interface bit rates, with a breakdown by individual interface. This is a summary of the busiest interfaces across your network.
See Configuration > Infrastructure Devices for setup instructions.
When more than 10 devices are registered, the full list can be displayed by clicking the List All / Drill Down icon beside the table title.
The CSV link at the top right of the table generates a .csv export.
This middle graph shows all events and alerts across your network during the reporting period.
Figure: Detail of GigaFlow's report period selection panel, at the top of most pages
Figure: Clicking in either the From or To field brings up a date and time selector.
Click once on any infrastructure device IP address or interface in the tables and you will be taken to an overview report for that device or interface. You can access the same overview using GigaFlow's search function. Search for the IP address and in the main left-hand side table, click the Infrastructure Device name. In the right-hand side table, click Overview.
See Dashboards > Device Overview and Dashboards > Interface Overview.
Located at Dashboards > Device Overview; there is also a unique Device Overview subpage for each infrastructure device.
From the Dashboards menu, the Device Overview option displays:
This information is also displayed in Performance Overview and in Server Overview. See Dashboards > Performance Overview > Top Devices (Last Hour v Last Week Hour) and Dashboards > Server Overview.
Device Overview Subpages
Click once on any infrastructure device IP address or interface in the tables and you will be taken to an overview report for that device or interface. You can access the same overview using GigaFlow's search function. Search for the IP address and in the main left-hand side table, click the Infrastructure Device name. In the right-hand side table, click Overview.
The overview for each infrastructure device includes graphs and tables with useful information. These include:
This panel lists the most important device information, including:
You can add an Attribute, or alias, for network infrastructure by selecting from the interface Attributes drop down selector. See Configuration > Attributes for more on attributes. This panel lists attributes and useful tools associated with the selected device, including:
Attributes. See Configuration > Infrastructure Devices and click on any device for more. Links out to useful tools, including Forensics, ARPs, CAMs, Live Interfaces and Traffic Overview. Links to associated integrations. See System > Global for more about integrations.
Figure: Visual from Device Overview subpage
The information is presented as a pie-chart. Each application can be queried by clicking the drill down icon for more.
This graph shows the top 10 ports/applications associated with this device in the past hour.
This graph shows the top 10 site pairs associated with this device in the past hour.
This graph shows the top 10 source IPs associated with this device in the past hour.
This graph shows the top 10 destination IPs associated with this device in the past hour.
This graph shows the summary traffic volume information for this device (MB/s).
This graph shows the summary interface traffic volume information for this device (MB/s).
A timeline of threat events associated with the device in the report period.
A table of all the interfaces associated with this device is displayed at the bottom of the page; the information presented includes:
Located at Dashboards > Interface Overview; there is also a unique Interface Overview subpage for each interface.
From the Dashboards menu, the Interface Overview option displays high-level summary information; some of this is also displayed in Server Overview. See Dashboards > Server Overview. The three graphs displayed are:
Figure: GigaFlow's Interface Overview page
Click once on any infrastructure device IP address or interface in the tables and you will be taken to an overview report for that device or interface. You can access the same overview using GigaFlow's search function. Search for the IP address and in the main left-hand side table, click the Infrastructure Device name. In the right-hand side table, click Overview.
Dedicated Interface Overview Pages
At the top of the page, you can see:
This panel displays the interface details:
You can edit these at Configuration > Infrastructure Devices.
You can add an Attribute, or alias, for an interface by selecting from the interface Attributes drop down selector. See Configuration > Attributes for more. This panel lists attributes and useful tools associated with the selected interface, including:
In addition, you will see graphs and tables of:
This is a summary of the total traffic for that interface.
This is a summary of the total inward traffic for that interface.
This is a summary of the total outward traffic for that interface.
This is a summary of the total inward packets for that interface.
This is a summary of the total outward packets for that interface.
A summary of the total inward flows for that interface.
This is a summary of the total outward flows for that interface.
DSCP (Differentiated Services Code Point) In summary information, used for ingress policing configuration.
DSCP (Differentiated Services Code Point) Out summary information, used for ingress policing configuration.
DSCP (Differentiated Services Code Point) In summary information, used for egress policing configuration.
DSCP (Differentiated Services Code Point) Out summary information, used for egress policing configuration.
Application traffic overview.
This graph shows the traffic associated with the busiest applications over the past hour. The information displayed includes:
When more than ten applications are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the top 10 source IPs over the past hour; this is compared with the same hour a week before.
This graph shows the top 10 destination IPs over the past hour; this is compared with the same hour a week before.
This is a graph and table of site bit rates, with a breakdown by individual site. This is a summary of the busiest sites across your network.
See Configuration > Sites for setup instructions.
When more than 10 sites are registered, the full list can be displayed by clicking the List All / Drill Down icon beside the table title.
The CSV link at the top right of the table generates a .csv export.
This section provides an overview of the traffic associated with source IPs.
This graph shows the traffic associated with the busiest applications over the past hour. The information displayed includes:
When more than ten applications are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the top 10 source IPs over the past hour.
This graph shows the top 10 destination IPs over the past hour.
This is a graph and table of traffic source IP bit rates, with a breakdown by individual site. This is a summary of the busiest traffic source IPs across your network.
When more than 10 traffic source IPs are registered, the full list can be displayed by clicking the List All / Drill Down icon beside the table title.
The CSV link at the top right of the table generates a .csv export.
This section provides an overview of the traffic associated with destination IPs.
This graph shows the traffic associated with the busiest applications over the past hour. The information displayed includes:
When more than ten applications are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the top 10 source IPs over the past hour.
This graph shows the top 10 destination IPs over the past hour.
This is a graph and table of traffic destination IP bit rates, with a breakdown by individual site. This is a summary of the busiest traffic destination IPs across your network.
When more than 10 traffic destination IPs are registered, the full list can be displayed by clicking the List All / Drill Down icon beside the table title.
The CSV link at the top right of the table generates a .csv export.
Located at Dashboards > Sites Overview; there is also a unique Sites Overview subpage for each site.
From the Dashboards menu, the Sites option displays summary information.
Sites Subpages
Click once on any Sites name in the table and you will be taken to an overview report for that site.
Click once on any of the "Down Arrows" beside the site name in the table and you will be taken to the forensics report for that site.
The overview for each site includes graphs and tables with useful information (similar to device overviews). These include:
This graph shows the top 10 applications associated with this site over the past hour.
This graph shows the top 10 site pairs associated with this site over the past hour.
This graph shows the top 10 source IPs over the past hour.
This graph shows the top 10 destination IPs over the past hour.
This graph shows the source traffic associated with sites over the past hour. The information displayed includes:
When more than ten sites are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the destination traffic associated with sites over the past hour. The information displayed includes:
When more than ten sites are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
Located at Dashboards > Events.
From the Dashboards menu, the Events option displays summary information about events and exceptions. See also Dashboards > Server Overview. You can also click on the Events item in the main menu to access the same information.
Some things that will trigger an event record include:
On the Events page, you can see:
A timeline of all events in the reporting period, the Events Graph. A tabulated version of this information is shown underneath.
Figure: Events Graph
Figure: Event Categories infographic
Figure: Confidence & Severity infographic
By clicking once on any legend item, you will be taken to a detailed report, e.g. detailed reports for each Event Type, Source Host, Infrastructure Device, Event Category and Target Host.
To view a world map overlaid with attack/event trajectories, click on the Threat Map main menu item.
Figure: The main threat map visualization
Alongside the map, you will see the expandable tables:
Most items in the side tables can be examined in isolation, i.e. Event IP Sources, Event Types, Event Categories, Event Devices, Event IP Destinations.
Click on the relevant IP and the page will display information related only to that IP. The main table below the map displays a complete summary of the threats mapped with time. In addition to the information presented in the side tables, the main table includes:
See Dashboards > Events.
The Profiling main menu item has three options: Realtime Overview, Event Dashboard and Sites.
Profiles help you to understand the normal behaviour of your network. For more, and to create profiles, go to Configuration > Profiling.
Profiling > Realtime Overview
The Realtime Profiling Status shows realtime data on defined Profilers. The display updates every 5 seconds and shows:
Located at Profiling > Profiling Events.
The Profiling Event Dashboard gives an overview of profile events.
Figure: The Profiling Events page
At the top of the page you can set both the reporting period and the resolution, from one minute to four weeks.
This infographic shows a timeline of the number of events with the profile(s) involved. Circle diameters represent the number of events. The peak number of events in the timeline for each profile is highlighted in red.
This infographic shows a timeline of the number of events along with their estimated severity level(s). Circle diameters represent the number of events. The peak number of events in the timeline for each severity level is highlighted in red.
Event Entries is dynamically generated; you must click Click To Load to populate the table. The Event Entries table gives a breakdown of profile events with the following entries:
A drop-down selector lets you choose how many of the most recent events to display.
To access a detailed overview of any flow, click on the adjacent Drill Down icon. This provides a complete overview of that flow, listing:
See Glossary for more about flow record fields used by GigaFlow.
See also Search for instructions to access the Graphical Flow Mapping feature.
By clicking any Category, Severity, Source IP or Target IP in the Event Entries table, you will be taken to a version of the Events Dashboard filtered for that item.
See Dashboards > Events for an overview of the structure of this page.
Located at Profiling > Sites.
Sites are subnet and IP range aliases.
This graph shows the top 10 destination IPs over the past hour.
This graph shows the top 10 destination IPs over the past hour.
This graph shows the top 10 source IPs over the past hour.
This graph shows the top 10 destination IPs over the past hour.
This graph shows the traffic associated with sites over the past hour. The information displayed includes:
When more than ten sites are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
This graph shows the destination traffic associated with sites over the past hour. The information displayed includes:
When more than ten sites are registered, the full list can be displayed by clicking the drill down icon. This displays a tabular version of all the data, with a CSV export option.
Reports gives you access to system reports and logs. GigaFlow stores a record of all reports. Reports can be generated in many ways but most commonly by viewing a Forensics page; viewing a Forensics page automatically generates a report. When a report is generated by a user, it is cached by the system and can be accessed again almost immediately. In addition, all recorded reports can be re-run from scratch at any time. Runtimes vary with the scope of the report, i.e. reports that involve more data will take longer to complete. Typical reports for limited periods of time complete in seconds.
Administrators have access to all reports run by all users.
See Configuration > Reporting for more.
Located at Reports > My Current Queries.
My Current Queries lists your queries currently in process. The table includes:
The filters used to define the report.
Located at Reports > My Complete Queries.
My Complete Queries lists your completed queries. The table includes:
Located at Reports > All Complete Queries.
This option provides administrators with a view of all queries made by all users.
This option provides administrators with a view of all forensic queries made by all users.
Located at Reports > Canceled Queries.
Canceled Queries lists your canceled queries. The table includes:
Located at Reports > Cluster Search.
Figure: Conducting a GigaFlow Cluster search
Following the search link from Apex, you will be brought to a new tab and the log in screen for the Pitcher machine. After logging in, you will be brought to the GigaFlow Cluster report page.
Figure: The initial view of the GigaFlow Cluster report page
This displays a list of hits for this IP address across the cluster. In this example, the IP address 172.21.21.21 was searched for across 11 devices monitored by three receivers; the system found 9 devices with data matching the search, and there were no errors.
In the first table, each GigaFlow server is listed with:
Figure: Clicking on the drill down icon beside a result brings up the full user interface and a forensics report for that device on the associated GigaFlow server
The system allows ten minutes between running the report and viewing these results without re-authentication.
You can also select different report types to run on that device on that GigaFlow server by selecting from the drop-down menu. See Reports > Forensics in the main Reference Manual for more.
Located at Reports > DB Queries.
DB Queries displays a list of the most recent database queries. The information returned includes:
Located at Reports > Forensics.
See Configuration > Reporting for instructions on how to configure and create new report types. See Appendix > Forensic Report Types for a complete description of the different report types.
Forensics allows you granular, filterable reporting on the stored records. GigaFlow records flow posture, i.e. whether or not a flow is flagged as an excepted event, allowing for detailed analysis.
To create a report:
A router - an infrastructure device - has an IP of 192.0.2.1. This router was defined at Configuration > Infrastructure Devices. Choose an Applications report from the report type drop-down menu. Choose Infrastructure Device from the filter drop-down menu. Choose the router from the list of devices and apply the filter by clicking +.
The system will return a graph and/or table with details of:
Queries can be entered directly and quickly using the direct filtering syntax.
Field | Operators | Description | Example |
---|---|---|---|
srcadd | =, != | IP Address that the traffic came from. | srcadd=172.21.40.2 |
dstadd | =, != | IP Address that the traffic went to. | dstadd=172.21.40.3 |
inif | <, =, >, != | ifIndex of the interface through which the traffic came into the router. | inif=23 |
outif | <, =, >, != | ifIndex of the interface through which the traffic left the router. | outif=25 |
pkts | <, =, >, != | Number of packets seen in the flow. | pkts>100 |
bytes | <, =, >, != | Number of bytes seen in the flow. | bytes<10000 |
duration | <, =, >, != | Duration of flow in milliseconds. | duration>100 |
srcport | <, =, >, != | Source IP port of flow. | srcport>1024 |
dstport | <, =, >, != | Destination IP port of flow. | dstport=5900 |
flags | <, =, >, != | TCP flags of the flow (C, E, U, A, P, R, S, F), e.g. flags=2 means SYN only. | flags=2 |
proto | <, =, >, != | IP Protocol Number/Type. | proto=16 |
tos | <, =, >, != | ToS marking, e.g. tos=104 (Flash). | tos=104 |
srcas | <, =, >, != | Source AS Number. | srcas!=5124 |
dstas | <, =, >, != | Destination AS Number. | dstas!=5124 |
fwextcode | =, != | Forwarding Extended Code. | fwextcode='out-of-memory' |
fwevent | =, != | Forwarding Event. | fwevent!='Flow Deleted' |
tgsource | =, <> | Site Source. | tgsource<>'My network 1' |
tgdest | =, <> | Site Destination. | tgdest<>'Other' |
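As an illustration, an expression in the direct filtering syntax above breaks down into (field, operator, value) triples. The sketch below is a hypothetical tokenizer, not GigaFlow's internal parser; the field names and operators come from the table above.

```python
import re

# Hypothetical tokenizer for the direct filtering syntax shown above.
# GigaFlow's own parser is internal; this sketch only illustrates how an
# expression splits into (field, operator, value) triples.
FILTER_RE = re.compile(r"(\w+)\s*(<>|!=|<|>|=)\s*('[^']*'|\S+)")

def parse_filters(expression):
    """Split e.g. "srcadd=172.21.40.2 dstport=5900" into triples."""
    return [(field, op, value.strip("'"))
            for field, op, value in FILTER_RE.findall(expression)]

print(parse_filters("srcadd=172.21.40.2 pkts>100 tgdest<>'Other'"))
# → [('srcadd', '=', '172.21.40.2'), ('pkts', '>', '100'), ('tgdest', '<>', 'Other')]
```

Quoted values (such as site names containing spaces) keep their quotes in the expression but are unquoted in the parsed triple.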
Located at Reports > First Packet Response.
First Packet Response (FPR) is a useful diagnostic tool: it measures the difference between the first-packet timestamp of a request flow and the first-packet timestamp of the corresponding response flow from a server. By comparing the FPR of a transaction with historical data, you can troubleshoot unusual application performance.
On loading, this page displays any First Packet Response information available for Monitored Servers.
Figure: The First Packet Response reports page
To set up a monitored server, go to Configuration > Server Subnets.
The displayed information includes:
Click the Ticker check box next to a device to enable a ticker in the table below with live response times in milliseconds.
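The FPR calculation itself is simple to express in code. The sketch below assumes epoch-millisecond first-packet timestamps; the function names and the baseline threshold are illustrative, not GigaFlow internals.

```python
def first_packet_response_ms(request_first_pkt_ms, response_first_pkt_ms):
    """First Packet Response: the gap between the first packet of the
    request flow and the first packet of the corresponding response flow."""
    return response_first_pkt_ms - request_first_pkt_ms

def is_unusual(fpr_ms, historical_fpr_ms, factor=3):
    """Crude baseline check: flag a transaction whose FPR exceeds the
    historical average by the given factor (threshold is illustrative)."""
    return fpr_ms > factor * historical_fpr_ms

fpr = first_packet_response_ms(1_700_000_000_000, 1_700_000_000_042)
print(fpr)                 # → 42
print(is_unusual(fpr, 10)) # → True (42 ms exceeds 3 × the 10 ms baseline)
```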
GigaFlow's IP Viewer uses MapBox to visualise the physical location of IP addresses.
The NAT Reporting dialog shows network translations that are color-coded so that you can easily identify which entries changed for each session. The colors indicate that the NAT process has changed that value. Each value type has a different color to make the changes obvious. For example, a value found in dark green under the NAT SRCADD column represents the translated network address of the value found in the same row under the SRCADD column with the same color.
If you want the NAT report to be focused on a particular time period, then click the Pick period drop-down box in the upper-right corner of the dialog. Here you can customize the time period according to different criteria (for example, Last, This, To Date, or Trailing).
In the upper-left corner search boxes, you can perform the following types of searches:
To hide or show duplicate records, click the Show/Hide Duplicates button.
Click the Show Only Translated/Show All 1000 button to toggle between showing only the records that were changed by the NAT process and showing all the records.
The lens icon related to the DEVICE column tries to find that same session on other devices on the GigaFlow server.
The drill-down arrow related to the DEVICE column redirects you to the Reports > Forensics dialog for that Session ID on that specific device.
The drill-down arrows related to the NAT SRCADD and NAT DSTADD columns redirect you to Reports > IP Viewer. The IP Viewer modal provides the ability to search for an IP address or user data across multiple GigaFlow servers and routers in an organization without having to know anything about that organization's network infrastructure. The IP Viewer modal displays the found data in a simple and accessible way to the end user, while providing filtering and drill-down capabilities right down to the raw forensics level.
You can also find the NAT devices in the Configuration > Infrastructure Devices dialog. Here you can see all the devices that GigaFlow is receiving NAT data for. If the Nat column is marked as true, then the related device is available in the NAT Reporting dialog.
A network audit report is a standard-format JSON object that contains a summary of all the devices registered by GigaFlow that belong to a particular subnet.
Network audit reports are enabled for a particular subnet at Reports > System Wide Reports > Subnet List. Enabling a network audit runs the network audit script for the selected subnet(s). You can view the network audit script at System > Event Scripts.
Server configuration happens at Configuration > Server Subnets.
Located at Reports > Saved Reports.
Newly generated reports of interest can be saved for reference. Click Save in the top right corner.
Once a report has been saved, it can be viewed in two ways: (i) Open, which opens the report without running it, so that more filters can be added; or (ii) Open and Run.
Saved Reports are listed with the following information:
Each automatically discovered server is listed here with:
Clicking on a device brings up detailed information for that server.
Details about each discovered server are shown here, with information about its behaviour as a source and destination for traffic. Two tables are shown on screen.
Located at Reports > System Wide Reports.
These are static reports produced by GigaFlow. From the Reports menu, the System Wide Reports option expands to display a further five options:
Located at Reports > System Wide Reports > SYN Forensics Monitoring.
GigaFlow monitors all TCP flows where only the SYN bit is set. In normal network operations, this indicates that a flow has not seen a reply packet while active in a router's Netflow cache.
A lonely SYN can be an indicator that:
GigaFlow creates an alert when:
On the SYN Forensics Monitoring page, you can find two tables of information:
Each table lists the following information:
See also System > Alerting.
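A "lonely SYN" check is easy to reproduce outside GigaFlow using the standard TCP flag bit values (FIN=1, SYN=2, RST=4, PSH=8, ACK=16, URG=32); this matches the flags=2 "SYN only" example in the Forensics filter syntax. The flow records below are invented for illustration.

```python
SYN = 0x02  # standard TCP SYN bit

def is_lonely_syn(tcp_flags):
    """True when only the SYN bit is set, i.e. the flow saw no reply
    while active in the router's NetFlow cache."""
    return tcp_flags == SYN

flows = [
    {"src": "10.0.0.5", "dst": "198.51.100.7", "flags": 0x02},  # SYN only
    {"src": "10.0.0.5", "dst": "198.51.100.8", "flags": 0x12},  # SYN+ACK
    {"src": "10.0.0.6", "dst": "198.51.100.7", "flags": 0x02},  # SYN only
]
lonely = [f for f in flows if is_lonely_syn(f["flags"])]
print(len(lonely))  # → 2
```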
Located at Reports > System Wide Reports > Device Connections.
This page lists all known network connected devices. This list is used as a look-up for Layer-2 to Layer-3 connectivity.
The table lists:
Clicking the export icon beside the page title creates a .csv file for download or printing.
Located at Reports > System Wide Reports > MAC Address Vendors
This page shows information about devices associated with particular MAC vendors on your network; GigaFlow displays:
Click on Count to populate the second table, MAC Addresses. These are the individual devices associated with that vendor. For each device, the table shows:
The Subnet List report summarises all the subnets automatically discovered using SNMP. The details listed are:
This report summarises infrastructure devices by flow rate, showing a graph and table listing, for each device:
Located at Reports > SQL Report.
Select how many of the most recent SQL Queries to view from the drop-down menu, i.e. the most recent 10, 50, 100 or All.
The table lists the following information:
Located at Reports > Trace Extraction Jobs.
This report gives details on the trace extraction jobs that you have created, such as creation date, completion status, file size, start time, end time, applied filter.
You can take the following actions on the generated package capture files:
Located at Reports > User Events.
Select how many of the most recent User Events to view, i.e. the most recent 50, 100, 250 or 500 events.
This table gives information about:
Figure: The Configuration menu
GigaFlow comes loaded with a standard set of application port and protocol definitions. Flow records are associated with application names if there is a match.
Users can define their own application names within the software and have that application ID (Appid) available within the flow record. There are 3 techniques used, applied in order:
To add a defined service, perform the following steps:
Note: The Name field and at least one of the IP Addressing or IP Port(s) fields are mandatory. The Description and the Protocol fields are optional.
Flow Objects are defined by:
IP addresses, MAC addresses, Ports and Sites can also be defined as both Source/Destination.
You can AND existing profile(s) to the new Flow Object, i.e. the new definitions are added to the existing profile(s).
You can also select alternative existing profile(s) that this new profile maps to (OR), i.e. the new Flow Object uses either the new definitions or the existing Flow Object definitions.
To AND or OR profile(s), use the drop-down menu in the Flow Object definitions and click the +.
As an example, a printer installation at a particular location connected to a particular router can be defined by a Flow Object that consists of:
Or
A second Flow Object can be defined that will have its flow checked against the Allowed profile; this is an Entry profile.
To add a protocol or port application, click the Add protocol/port application icon and enter:
Click Save.
The Existing Defined Service table identifies the services based on the IP address, and/or Port, and/or Protocol. Once you add a defined service using the Add Defined Service button, it is included in the table.
The ID column shows the internal database ID used to identify the related service.
The Hits column shows the number of times this service has been seen in the flows.
The service Name is shown in your forensics applications reports.
In the Existing Defined Service table, you can perform the following actions:
To edit a defined service, click the Edit this object button in the related row, under the Actions column.
The EXISTING DEFINED SERVICE dialog shows.
To delete a defined service, click the Delete button in the related row, under the Actions column.
The CONFIRMATION dialog shows.
To search for an item inside the table, enter the desired string in the Search field.
You can perform a search using keywords from all the columns present in the Existing Defined Service table.
Already defined applications are listed here. Existing applications can be edited to associate icons.
Already defined flow objects are listed here.
Already defined protocol/port applications are listed here. Existing protocol/port applications can be edited to associate icons.
Located at Configuration > Attributes.
Attributes are aliases used to help with identification of network infrastructure and users. Assigning an attribute category to a MAC address, user, IP, device or interface allows different user groups to easily tag and identify your network infrastructure. For example, a network engineer may prefer to alias a device with a name or category appropriate to their view of your network. The security team may prefer a different categorisation.
The categories are:
The aim is to facilitate rapid identification of network infrastructure and user groups.
If you want to add a new device attribute:
Configure GigaFlow to receive AWS VPC flow logs
The VPC Flow Log Ingestion function allows GigaFlow to treat AWS VPC flows in the same way as it handles flow records, giving you the ability to report on both on-premises and cloud-based network traffic.
To enable the VPC Flow Log Ingestion function, perform the following steps:
Make sure that the following required fields at the minimum are included when you enable the VPC flow logs:
Field | Description |
---|---|
type | The type of traffic. |
accountid | The AWS account ID of the owner of the source network interface for which the traffic is recorded. |
region | The region that contains the network interface for which traffic is recorded. |
interface-id | The ID of the network interface for which the traffic is recorded. |
instance-id | The ID of the instance that is associated with the network interface for which the traffic is recorded, if the instance is owned by you. Returns a '-' symbol for a requester-managed network interface (for example, the network interface for a NAT gateway). |
flow-direction | The direction of the flow with respect to the interface where traffic is captured. Possible values: ingress or egress. |
srcadd | The source address for incoming traffic, or the IPv4 or IPv6 address of the network interface for outgoing traffic on the network interface. The IPv4 address of the network interface is always its private IPv4 address. |
dstadd | The destination address for outgoing traffic, or the IPv4 or IPv6 address of the network interface for incoming traffic on the network interface. The IPv4 address of the network interface is always its private IPv4 address. |
pkt-srcaddr | The packet-level (original) source IP address of the traffic. Use this field with the srcaddr field to distinguish between the IP address of an intermediate layer through which traffic flows and the original source IP address of the traffic. |
pkt-dstaddr | The packet-level (original) destination IP address for the traffic. Use this field with the dstaddr field to distinguish between the IP address of an intermediate layer through which traffic flows and the final destination IP address of the traffic. |
srcport | The source port of the traffic. |
dstport | The destination port of the traffic. |
traffic-path | The path that egress traffic takes to the destination. To determine whether the traffic is egress traffic, check the flow-direction field. |
flags | The bitmask value for the TCP flags seen in the flow. TCP flags can be OR-ed during the aggregation interval; for short connections, the flags might be set on the same line in the flow log record (for example, 19 for SYN-ACK and FIN, and 3 for SYN and FIN). See https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-records-examples.html#flow-log-example-tcp-flag for an example. |
protocol | The IANA protocol number of the traffic. For more information, see Assigned Internet Protocol Numbers. |
packets | The number of packets transferred during the flow. |
bytes | The number of bytes transferred during the flow. |
action | The action that is associated with the traffic. |
start | The time, in Unix seconds, when the first packet of the flow was received within the aggregation interval. This might be up to 60 seconds after the packet was transmitted or received on the network interface. |
end | The time, in Unix seconds, when the last packet of the flow was received within the aggregation interval. This might be up to 60 seconds after the packet was transmitted or received on the network interface. |
vpc-id | The ID of the VPC that contains the network interface for which the traffic is recorded. |
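To make the field list concrete, here is a hedged sketch of parsing one space-separated VPC flow log record. The field order below is an assumption (custom log formats declare their own field order, so adjust it to match yours), and the flag-bit names use the standard TCP bit values.

```python
# Assumed field order for a custom VPC flow log format; real custom
# formats declare their own order, so adjust FIELDS to match yours.
FIELDS = ["srcaddr", "dstaddr", "srcport", "dstport", "protocol",
          "packets", "bytes", "start", "end", "action", "tcp-flags"]

TCP_BITS = {1: "FIN", 2: "SYN", 4: "RST", 16: "ACK"}  # standard bit values

def parse_record(line):
    rec = dict(zip(FIELDS, line.split()))
    # Flags are OR-ed over the aggregation interval:
    # 19 = SYN(2) + ACK(16) + FIN(1), 3 = SYN(2) + FIN(1).
    bits = int(rec["tcp-flags"])
    rec["tcp-flags"] = [name for bit, name in TCP_BITS.items() if bits & bit]
    return rec

rec = parse_record(
    "10.0.1.5 10.0.2.9 49152 443 6 12 3418 1700000000 1700000060 ACCEPT 19")
print(rec["tcp-flags"])  # → ['FIN', 'SYN', 'ACK']
```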
Figure: AWS S3 Access
Note: Once the Access Key creation process has ended, you will not have access to and cannot retrieve the Secret access key from the AWS platform. Make sure to store the Secret access key provided by AWS in a safe environment.
In this field, enter the IP address for GigaFlow to use when it creates new virtual devices. The IP addresses of these virtual devices start at this given IP and are incremented by 1 with each newly created device.
Default value:
Note: The polling of the S3 buckets takes place every 2 minutes for the selected buckets. Before the buckets are polled, GigaFlow collects EC2 information for all the selected regions.
If you do not want to use this Delete feature and you want to store the log files for later use on AWS, then VIAVI provides a script to migrate the log files. To store the VPC log files for a longer duration while at the same time not progressively increasing the time for GigaFlow polling AWS for available logs, we suggest moving these logs from the S3 bucket into an AWS Lambda service instance.
If you experience delays in AWS when transferring VPC flow logs to the S3 bucket, you can set the Flow Reception Window to a corresponding covering value and the Client Buffer Window to that value plus 2 minutes, to keep GigaFlow and Apex synchronized. For example, if you experience AWS VPC flow log delivery delays fluctuating under 15 minutes, set the GigaFlow Flow Reception Window to 15 minutes (900000 milliseconds) and the Client Buffer Window to 17 minutes (1020000 milliseconds).
Modify an existing connection
To edit a previously created connection, perform the following steps:
Delete a connection
To delete an existing connection, perform the following steps:
Click the
The AWS Bucket Statistics page opens in a new tab. On this page you can also see how many records GigaFlow found per S3 file.
The Private Networks functionality is used to determine what cloud based traffic is internal to your organisation and what traffic is external.
The external traffic is then marked as
In the Private Networks frame, GigaFlow first adds the following private networks:
Then GigaFlow scans your associated AWS account for all subnet information. Using this information, GigaFlow creates so-called supernets, which are used to define which IP addresses are private to the customer; everything else is considered public traffic.
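GigaFlow's supernet construction is internal, but the idea can be illustrated with Python's standard ipaddress module: collapse the discovered subnets so that only the largest covering networks remain, then treat anything inside them as private. The subnets below are example values.

```python
import ipaddress

# Example discovered subnets; 10.1.4.0/24 is contained in 10.1.0.0/16,
# so collapsing keeps only the larger network.
discovered = [
    ipaddress.ip_network("10.1.0.0/16"),
    ipaddress.ip_network("10.1.4.0/24"),
    ipaddress.ip_network("172.21.40.0/24"),
]
supernets = list(ipaddress.collapse_addresses(discovered))
print([str(n) for n in supernets])  # → ['10.1.0.0/16', '172.21.40.0/24']

def is_private_to_org(ip):
    """Everything inside a supernet is private; everything else is public."""
    return any(ipaddress.ip_address(ip) in net for net in supernets)

print(is_private_to_org("10.1.4.9"))  # → True
print(is_private_to_org("8.8.8.8"))   # → False
```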
In the New Private Network frame, GigaFlow lets you add your own private networks. To do this, perform the following steps:
The AWS EC2 Discovered Subnet Information frame lets you see the AWS discovered subnets. When you add your AWS credentials, GigaFlow uses them to gather the subnet information and populate this table.
To refresh this AWS subnet table, click the Refresh button in the upper-left corner of the AWS EC2 Discovered Subnet Information frame. The refresh request can take up to 2 minutes to complete.
Note: Subnets are automatically obtained from AWS every hour.
Only the largest subnets are kept to identify the public or private traffic. For example, since subnet
To see a list of all EC2 information collected by GigaFlow for use during enrichment of device and interface information in the GigaFlow UI, click the
The AWS EC2 Instance Information page opens in a new tab.
Specific device reports
To see the reports on a specific device, perform the following steps:
The Enterprise Dashboard for the selected device opens.
The Forensics dashboard for the selected interface opens.
As a result, GigaFlow shows a list with every individual flow record related to that specific interface.
The NSG Flow Log Ingestion function allows GigaFlow to treat Azure NSG flow logs in the same way as it handles flow records, giving you the ability to report on both on-premises and cloud-based network traffic.
Get your Azure Subscription ID
To obtain your Subscription ID in Azure, perform the following steps:
A results dialog shows.
Note: Save the Subscription ID to a text file. You will use this code later in the configuration process, in the GigaFlow UI.
Add a registered application in Azure
To add a registered application in Azure, perform the following steps:
A results dialog shows.
The App registrations page shows.
The Register an application page shows.
You are redirected to the registry page of the new application.
The Certificates & secrets page shows.
The Add a client secret popup dialog shows on the right side.
The newly created secret shows in the Client secrets tab.
Note: For more details on adding a registered application in Azure, access the following link Quickstart: Register an application with the Microsoft identity platform.
Assign permissions to the storage and the Virtual Machine (VM)
To assign permissions to the storage and the VM, perform the following steps:
Add the Storage Blob Data Reader role
A results dialog shows.
The overview page related to that storage account shows.
The Add role assignment page shows.
The Select members popup dialog shows on the right side.
The expected result is for the Added Role assignment message to show in the top right of the dialog.
Add the Reader and Data Access role
The Add role assignment page shows.
The Select members popup dialog shows on the right side.
The expected result is for the Added Role assignment message to show in the top right of the dialog.
Add the Virtual Machine Contributor role
A results dialog shows.
The overview page related to that subscription shows.
The Add role assignment page shows.
The Select members popup dialog shows on the right side.
The expected result is for the Added Role assignment message to show in the top right of the dialog.
Add the Reader role
The Add role assignment page shows.
The Select members popup dialog shows on the right side.
The expected result is for the Added Role assignment message to show in the top right of the dialog.
If you do not have NSG Flow logs enabled in Azure, follow the Enable the microsoft.insights provider and Enable the NSG Flow logs procedures below.
Enable the microsoft.insights provider
To enable the microsoft.insights resource within your subscription, perform the following steps:
A results dialog shows.
The overview page related to that subscription shows.
The Resource providers dialog shows.
Note: Wait until the provider is registered.
Enable the NSG Flow logs
To enable the NSG Flow logs, perform the following steps:
A results dialog shows.
The Flow logs dialog shows.
The Select network security group page shows.
You are redirected back to the Create a flow log page.
Note: After you enable the NSG Flow logs in Azure, wait until the storage container has data, then go to GigaFlow and rescan the account to detect the new storage container.
Note: For more details on enabling the NSG Flow logs in Azure, access the following link: Flow logging for network security groups.
Configure access to Azure in GigaFlow UI
To configure your access to Azure in the GigaFlow UI, perform the following steps:
Note: A popup message notifies you that a test is performed to check the correctness of the information you provided. If there are a large number of resources to collect, then the process may take some time to complete.
Modify a connection
To edit a previously created connection, perform the following steps:
Delete a connection
To delete an existing connection, perform the following steps:
To refresh an existing connection, perform the following steps:
GigaFlow immediately sends a new request to Azure for storage account, VM instance, and flow log records.
Azure NSG flows information links
VM Instances
Click the
The Azure VM Instance Information page opens in a new tab. For each VM discovered by GigaFlow, this page displays all the attributes available from Azure, including the following:
Storage Accounts
Click the
The Azure Storage Account Information page opens in a new tab. This page displays the following:
To display which MAC addresses are being monitored and the IP address of the related device, perform the following steps:
The Azure Storage Account Container frame shows.
Note: To force the process, refresh the connection in the Cloud Services > Azure tab.
The Azure Storage MAC Address Blocks frame shows.
Note: This table displays how long it took to process each block of NSG log data, its size, and what period it covered.
Located at Configuration > GEOIP.
You can change the geolocation and IP settings here.
To add new GEOIP overrides:
"Start IP,End IP,Country ISO Code,Region Name,Latitude,Longitude"
"IP/MaskBits,Country ISO Code,Region Name,Latitude,Longitude.
A table of existing GEOIP overrides is shown in the table below this.
You can select the number of items to show from the dropdown menu above the table; the default is 50 items.
Information displayed includes:
You can also search by entering a country code or clicking on the map below the table.
Located at Configuration > Infrastructure Devices.
To add a new infrastructure device:
To bulk-add new devices, i.e. more than one device at a time:
ip
ip, communityString
ip, communityString, deviceName
Use a new line for each new device added.
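For example, a small script can render a device inventory into the bulk-add format above. The device values below are illustrative only.

```python
def bulk_add_lines(devices):
    """One device per line, in one of the documented forms:
    "ip", "ip, communityString", or "ip, communityString, deviceName"."""
    lines = []
    for dev in devices:
        parts = [dev["ip"]]
        if "community" in dev:
            parts.append(dev["community"])
            if "name" in dev:
                parts.append(dev["name"])
        lines.append(", ".join(parts))
    return "\n".join(lines)

print(bulk_add_lines([
    {"ip": "192.0.2.1"},
    {"ip": "192.0.2.2", "community": "public"},
    {"ip": "192.0.2.3", "community": "public", "name": "edge-router-3"},
]))
```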
Recheck Forensics by clicking Refresh.
This shows a more detailed version of the Existing Devices table, with additional statistics for each device.
The Existing Device(s) table lists all connected infrastructure devices.
You can select how many of the devices to view, i.e. the most recent 10, 25, 50, 100 or all devices.
At the top of the table, the total number of infrastructure devices is given. You can also search for a particular device. Each column is sortable. The table displays interactive information, including:
The Device SNMP Mapping table displays:
Located at Configuration > Infrastructure Devices > Detailed Device Information.
By clicking any device IP address in the Existing Devices table, you can bring up a detailed overview of that device.
On loading, the device name, device IP address and device ID are given across the top of the page.
Below this information, you will see the SNMP Settings panel. SNMP information is listed here. This includes:
To change the device name:
NOTE: For flows ingested from cloud providers (for example, AWS or Azure), GigaFlow uses the Infrastructure Device name supplied by the cloud provider. If you change the Infrastructure Device name to a custom one, newly ingested flows from the cloud for that reporting device IP will overwrite the custom name.
To change SNMP version and/or community:
To test SNMP settings and status:
All other device information is populated automatically from the device and cannot be edited.
This section identifies SNMP issues for a particular device. It shows the queried OIDs (Object Identifiers) for which GigaFlow had issues and the number of times the query produced each issue.
The Empty Response table shows queries where the SNMP request was successful but the table returned was empty.
The No Response table shows queries that received no response. This can occur due to bad credentials or due to the security configuration on the end device.
To add attributes to the device:
There are quick-links to useful tools, including:
And links to associated integrations (Integrations) and servers (Server Discovery). See also System > Global for more about integrations.
See Reports > Forensics for more.
In this panel, you can view and make changes to the device information storage settings:
At the bottom of the detailed infrastructure device settings page, there are several tabs:
The Interfaces tab consists of an editable table with the following information:
The Subnets tab displays a list of SNMP discovered subnets. See also Reports > System Wide Reports > Subnet List.
The Flow Templates tab displays a list of the Netflow templates used, e.g.:
The Stats tab consists of a list of the number of VLAN, ARP and CAM entries.
The VLANs tab consists of a list of VLANs.
The Bridge Ports tab consists of a list of bridge ports.
The Bridge PortNumbers To ifIndex tab consists of a mapping of bridge port numbers to ifIndex values.
The Application Mapping tab displays a list of mapped applications with the key, appid and application listed for each mapping.
The SNMP Stats tab consists of a table that contains all the statistics for the SNMP requests sent by the device. This is the same table as on the System Status > SNMP Stats page, except for the two columns used to identify the device: Device IP and Device Name.
Enter:
Enter:
Located at Configuration > Profiling.
This is where you can create profiles that define the normal behaviour of your network.
To get going, create your first profile.
Step one is then to create a new Flow Object at Configuration > Applications. To create a flow object:
To create a new Profiler (this term is used interchangeably with profile):
This table displays a list of existing profiles.
Located at Configuration > Reporting.
One of the difficult aspects of reporting on network flows is the number of possible field combinations, with 25+ fields in the extended range. GigaFlow records all fields for all flows with no summarization and no deduplication. GigaFlow allows you to create exactly the report you want.
You can change reporting settings here, in the General and Forensics Reports panels.
Allows you to add new entries to be displayed in the left-hand navigation under the Reports option.
You can enter:
Allows you to import a JSON representation of new entries to be displayed in the left-hand navigation under the Reports option.
You can edit the general reporting settings in this panel:
See Appendix > Forensic Report Types for a complete description of the different report types. See also Reports > Forensics for the Direct Filtering Syntax used by GigaFlow.
You can view and clone built-in forensics reports in this panel.
From the Report drop-down menu, select the report type to view or clone. The default selection is Application Flows. In the panel below, you can view:
Table Query: select srcadd as srcadd,dstadd as dstadd,appid as appid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,dstadd,appid ORDERBY LIMITROW
Table Value Field: bits_total
Graph Query: select FIRSTSEEN as afirstseen,srcadd as srcadd,dstadd as dstadd,appid as appid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,dstadd,appid order by srcadd,dstadd,appid,afirstseen
Graph Time Field: afirstseen
Graph Value Field: bits_avgsec
Graph Key Field(s) separated by __: srcadd__dstadd__appid
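The WHERECLAUSE, ORDERBY, LIMITROW and MODER tokens in these queries are placeholders that GigaFlow fills in at run time. The exact substitutions are internal to GigaFlow; this Python sketch only illustrates the role each token plays (the helper and example filter are hypothetical):

```python
def build_report_sql(template, where, order_by, limit, sample_ms=60000):
    """Expand the placeholder tokens used in GigaFlow report templates.

    WHERECLAUSE -> the report's filter, ORDERBY -> the sort clause,
    LIMITROW -> the row cap, MODER -> the graph's sample interval (ms).
    """
    return (template
            .replace("WHERECLAUSE", "where " + where)
            .replace("ORDERBY", "order by " + order_by)
            .replace("LIMITROW", "limit %d" % limit)
            .replace("MODER", str(sample_ms)))

tmpl = ("select srcadd as srcadd,dstadd as dstadd,appid as appid, "
        "cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE "
        "group by srcadd,dstadd,appid ORDERBY LIMITROW")
sql = build_report_sql(tmpl, "device='10.0.0.1'", "bits_total desc", 50)
```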
To clone a report:
To add a new DSCP name:
Lists the existing user-defined report URLs that are available from the left-hand navigation under the Reports option.
The table lists:
Located at Configuration > Server Subnets. See also Reports > First Packet Response.
To begin using First Packet Response (FPR), you must specify the server subnets and ports that you would like to monitor. FPR monitors TCP and UDP traffic, e.g. DNS 53/UDP, and is inherently multi-threaded by device.
To add a new server subnet:
This tab allows you to view a list of existing monitored server subnets and to add new subnets. The main table displays the following information:
Actions, i.e. edit or delete a subnet. Editing a subnet allows you to specify a particular port to monitor.
This tab allows you to view a list of identified servers on the monitored server subnets. The main table lists the servers and the infrastructure devices (routers) involved in transactions to or from the server subnets.
This tab allows you to view a list of the infrastructure devices (routers) involved in transactions to or from the server subnets and associated servers.
You can select the number of items to show from the drop-down menu above the table; the default is 10 items.
Located at Configuration > Sites.
Sites are subnet and IP range aliases.
To define a new site:
To add a new site:
Sometimes, it can be useful to define a site by subnet and by infrastructure device. For example, a corporate network could have a subnet served by more than one router. To create a granular view of flow through each router, a separate site could be created for each router, defined by the IP range of the subnet as well as the device name. You can type to filter the device list.
To add a device to a site:
To bulk-add new sites, i.e. more than one site at a time:
name,description,startip,endip,natted(y/n),deviceIP
Use a new line for each new site.
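Each bulk-add line follows the six-field format shown above. A minimal Python sketch of that parsing (illustrative only; the helper name and returned structure are assumptions, not GigaFlow internals):

```python
def parse_site_line(line):
    """Parse one bulk-add site row: name,description,startip,endip,natted(y/n),deviceIP."""
    name, description, startip, endip, natted, device_ip = \
        [f.strip() for f in line.split(",")]
    return {"name": name, "description": description,
            "startip": startip, "endip": endip,
            "natted": natted.lower() == "y",  # y/n flag becomes a boolean
            "deviceIP": device_ip}

site = parse_site_line("HQ,Head office,10.1.0.0,10.1.255.255,n,10.1.0.1")
```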
To edit a site definition, click Configuration > Sites and click on the site name or on the adjacent drill down icon . This will bring up a new page for that site where you can edit the group definition.
The table of sites shows:
For each SNMP discovered subnet, the table displays:
Figure: The System Settings menu
This is where you can make changes to the system settings.
Located at System > Alerting.
Configure GigaFlow to alert you when something happens. Syslog or mail alerts can be set up for any of the following:
To configure a system log alert:
172.16.254.1, 172.16.254.2
IP_Address:Port:Facility:Level
e.g. 172.21.40.1:515:4:23,172.21.40.2:516:23:4
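The comma-separated target specification above can be split mechanically. This Python sketch is illustrative only (GigaFlow parses these targets internally; the function name is hypothetical):

```python
def parse_syslog_targets(spec):
    """Split a comma-separated list of IP_Address:Port:Facility:Level targets."""
    targets = []
    for item in spec.split(","):
        ip, port, facility, level = item.strip().split(":")
        targets.append((ip, int(port), int(facility), int(level)))
    return targets

targets = parse_syslog_targets("172.21.40.1:515:4:23,172.21.40.2:516:23:4")
```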
Sample Syslog Message Types
{"Application":"HTTPS TCP/443","Eventer":"172.0.0.1","Profile":"Facebook","appid":393659,"bytes":1447,"device":"172.0.0.1","domain":"", "dstadd":"172.0.0.1","dstport":50561,"duration":288,"eventname":"Profiler","flags":26,"fwevent":0,"fwextcode":0,"inif":12, "macdst":"00:00:00:00:00:00","macsrc":"00:00:00:00:00:00","outif":30,"packets":10,"proto":6,"srcadd":"172.0.0.1","srcport":443, "time":1475147589242,"timeH":"29-Sep-2016 12:13:09.242","tos":40,"user":""}
{"Application":"BitTorrent TCP/6881","Black List":"http://lists.blocklist.de/lists/bots.txt","Black List
Type":"Source","Eventer":"172.0.0.1","appid":400097,"bytes":120,"device":"172.0.0.1","domain":"","dstadd":"172.0.0.1",
"dstport":63533,"duration":1380, "eventname":"Black List Src","flags":20,"fwevent":0,"fwextcode":0,"inif":12,"macdst":"00:00:00:00:00:00","macsrc":"00:00:00:00:00:00", "outif":30,"packets":3,"proto":6,"srcadd"
{"Application":"TCP/49755","Eventer":"172.0.0.1","Syn Type":"Source","appid":442971,"bytes":152,"device":"172.0.0.1","domain":"","dstadd":"172.0.0.1","dstport":54350, "duration":1804,"eventname":"Syn Src Network Sweep", "flags":2,"fwevent":0,"fwextcode":0,"inif":30,"macdst":"00:00:00:00:00:00","macsrc":"00:00:00:00:00:00", "outif":12,"packets":3, "proto":6,"srcadd":"172.0.0.1","srcport":49755, "time":1475147666238,"timeH":"29-Sep-2016 12:14:26.238","tos":0,"user":""}
{"Application":"TCP/34056","Eventer":"172.0.0.1","Syn Type":"Destination","appid":427272,"bytes":152,"device":"172.0.0.1","domain":"","dstadd":"172.0.0.1", "dstport":34056,"duration":1124,"eventname":"Syn Dst Port Sweep","flags":2,"fwevent":0,"fwextcode":0,"inif":14,"macdst":"00:00:00:00:00:00","macsrc":"00:00:00:00:00:00", "outif":30,"packets":3,"proto":6,"srcadd":"172.0.0.1","srcport":57058, "time":1475147662254,"timeH":"29-Sep-2016 12:14:22.254","tos":0,"user":""}
The IP addresses in these sample Syslog messages have been replaced with 172.0.0.1.
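The samples above are plain JSON, so any log pipeline can consume them. A minimal Python sketch, using a trimmed copy of the first sample, shows how the epoch-millisecond time field relates to the human-readable timeH field (which is rendered in the server's local time zone) and how byte counts convert to bits:

```python
import json
from datetime import datetime, timezone

# Trimmed subset of the first sample message's fields (illustrative only).
sample = ('{"Application":"HTTPS TCP/443","Eventer":"172.0.0.1",'
          '"Profile":"Facebook","bytes":1447,"packets":10,"proto":6,'
          '"srcadd":"172.0.0.1","srcport":443,"dstadd":"172.0.0.1",'
          '"dstport":50561,"time":1475147589242}')

event = json.loads(sample)

# "time" is epoch milliseconds; "timeH" in the full samples is the same
# instant rendered in local time (the sample's 12:13:09 is 11:13:09 UTC).
when = datetime.fromtimestamp(event["time"] / 1000, tz=timezone.utc)
bits = event["bytes"] * 8  # byte counts convert to bits for rate reporting
print(when.isoformat(), bits)
```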
To configure a system email alert:
Toggle the Yes or No options for any of the alert situations. Click Save to store the server address.
This table lists different possible event triggers with options to: (i) create an event and (ii) trigger a script to run.
The event triggers include:
Located at System > Event Scripts.
This is where you can add your own scripts to create new workflows. A table lists existing scripts with the following information:
To add a script:
Located at System > GigaFlow Cluster.
A single GigaFlow server can be configured to search for IP addresses across many remote GigaFlow servers directly from Viavi's Apex system. This feature is useful for large organisations that may have many GigaFlow servers monitoring different networks within the organisation, e.g. in different regions. The central administrator may want a view across the entire network, e.g. to determine if a particular suspect IP address has been recorded by routers on different networks.
In this example, assume that you are the main administrator and you want visibility on several remote GigaFlow servers.
The set-up is:
Figure: Defining a GigaFlow cluster
Log in to GigaFlow Server #0, the Pitcher, and navigate to System > GigaFlow Cluster.
In the This Server panel, you will see a pre-generated unique secret. Leave this as is.
In another browser tab or window, log into Receiver 1 (GigaFlow Server #1). Copy the shared secret from Receiver 1's This Server panel. You do not need to do anything with the New Cluster Server panel on the receivers.
Figure: This Server panel
Switch back to the Pitcher (GigaFlow Server #0). In the New Cluster Server panel, perform the following steps:
NOTE: This IP address is used by the Pitcher to generate a secure hashed key for communication. The receiver reverses this hash using, among other things, the IP address of the Pitcher. An intermediate firewall (NAT) could create problems if the Pitcher does not create the hashed key using the IP address seen by the receiver.
NOTE: This user must exist on Receiver 1. If it does not exist, then switch to Receiver 1 and create a new normal user on Receiver 1 (for example, reportuser).
RESULT: The Pitcher connects to Receiver 1 using the Admin user to verify that everything is correct and to populate the table in the Cluster Access frame, on the System > GigaFlow Cluster page. Receiver 1 shows in the main table.
Figure: New Cluster Server panel
NOTE: The cluster server feature is flexible: a receiver in one cluster can be a pitcher for another.
The Cluster Access panel shows cluster access status. The table lists:
Following the search link from Apex, you will be brought to a new tab and the log in screen for the Pitcher machine. After logging in, you will be brought to the GigaFlow Cluster report page. This displays a list of hits for this IP address across the cluster; in this example, the IP address might be found on devices monitored by all three receivers. Clicking on a receiver name brings up the forensics report summary for that IP address from that receiver in the table at the bottom of the summary page. Alongside each receiver on the results page is a link out to that particular server.
Figure: Conducting a GigaFlow Cluster search
See also Reports > Cluster Search.
Communication between all clients in a GigaFlow cluster is IP to IP, i.e. unicast. The traffic is carried over HTTPS, using TLS with certificates.
Located at System > Global.
You can find most of the system-wide settings here.
Along the top of the page are quick links to the different settings options:
In the General settings box, you can change:
Before you can authenticate users using LDAP or AD, you must set the LDAP (Lightweight Directory Access Protocol) server.
Examples for an LDAP server integration:
Examples for Active Directory server integration:
In the SSL settings box, you can view and select or change:
Saving will restart the HTTPS service, which may take up to one minute.
Save ZeroMQ Settings.
Save Kafka Settings.
Save Syslog Settings.
Click the second icon from the left, above the main tabs, to add an SNMP v2 Settings Community.
SNMP allows GigaFlow to poll sending devices. GigaFlow supports multiple concurrent pollers. Each device is serviced every 30 minutes. SNMP polling is used to retrieve:
In the SNMP V2 settings box, you can add or delete new SNMP V2 community strings. To add or delete a community string:
Click the third icon from the left, above the main tabs, to add an SNMP v3 Settings Community.
SNMP allows GigaFlow to poll sending devices. In the SNMP V3 settings box, you can view and delete SNMP v3 community strings.
In the Log settings box, you can view the log file and change the logging level:
These are the proxy settings used to access GigaFlow's homeworld server. The information is required for several reasons: (i) to allow GigaFlow to call home and register itself; (ii) to let us know that your installation is healthy and working; (iii) to update blacklists. All calls home are in cleartext. See System > Licences for more.
Click the fourth icon from the left, above the main tabs, to add a MAC Vendor. To add a new vendor prefix:
Existing vendor prefixes are displayed in the table below alongside the date created and available actions, i.e. modify or delete.
If you are using email alerts, you can configure email here. To set up email:
In the Storage settings box, you can make changes to the following settings:
To commit changes, click Save Storage Settings.
The following information is given for each flow device:
Netflow
The minimum Forensics Storage is 21 days. Forensics data is stored in tables for up to four hours to speed up search and reporting. These tables are rolled into one-day tables after the Forensics Rollup Age period (four days by default; see System > Global). This is a minimum period.
It is important to set aside space for drive monitoring. GigaFlow will fill the disk drives until the pre-defined minimum amount of free space is left. GigaFlow caps the storage that any particular device is using for forensics data; this is 50 GB by default. This cap can be changed globally and on a per device basis which in turn sets an overall cap on the amount of space used by GigaFlow.
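The two drive-monitoring triggers described above can be sketched as follows. This is illustrative only: the function and its parameters are hypothetical, not GigaFlow internals, and the 50 GB default cap is the only value taken from the text:

```python
def bytes_to_prune(free_bytes, min_free_bytes, device_usage_bytes,
                   device_cap_bytes=50 * 10**9):
    """Sketch of the pruning pressure from the two rules above:
    keep at least the configured minimum free space, and keep each
    device's forensics data under its cap (50 GB by default)."""
    shortfall = max(0, min_free_bytes - free_bytes)       # global free-space rule
    over_cap = max(0, device_usage_bytes - device_cap_bytes)  # per-device cap rule
    return shortfall + over_cap

# A device 10 GB over its cap while the disk is 5 GB short of minimum free space:
pressure = bytes_to_prune(free_bytes=5 * 10**9, min_free_bytes=10 * 10**9,
                          device_usage_bytes=60 * 10**9)
```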
Type | Resolution | Table Duration (default) | Retention | Setting Involved |
---|---|---|---|---|
Raw Flows | millisecond | 1 hour to 1 day | 21 days | Min Free Space, Default Device Storage Space, Min Forensics Storage, Forensics Rollup Age. |
IP Search | millisecond | 1 day | 21 days | IP Search Duration. |
Events | millisecond | 4 hours | 100 days | Event Storage Period, Event Summary Storage Period. |
ARP | millisecond | 1 day | 100 days | ARP Storage Period. |
CAM | millisecond | 1 day | 100 days | CAM Storage Period. |
Interface Summaries | minute | 2 days | 200 days | Interface Summary Storage Period. |
Traffic Summaries | hour | 7 days | 200 days | Interface Summary Storage Period. |
An integration allows you to call defined external web pages or scripts directly from the Device Interface Overview page. To add an integration, navigate to System > Global. In the Integrations settings box, you can add or change an integration.
There are two types of integration:
To add an integration:
Example 1
In the code below, the populated field tells the software that you want to populate the device field with the IP address and the ifindex field with the ifindex. These fields (device and ifindex) will be passed to the target. We also have required fields, which the user must populate.
{
  'populated': {
    'device': 'flow_device',
    'ifindex': 'flow_ifindex'
  },
  'required': [
    {'name':'user','display':,'type':'text','value':},
    {'name':'password','display':'Password','type':'password'},
    {'name':'macro','display':'What Macro?','type':'select','data':['macro1','macro2','macro3','macro4']}
  ]
}
Example 2:
var ProcessBuilder = Java.type('java.lang.ProcessBuilder');
var BufferedReader = Java.type('java.io.BufferedReader');
var InputStreamReader = Java.type('java.io.InputStreamReader');

output.append(data);
output.append("Device IP:" + data.get("device") + " ");
output.append("IFIndex:" + data.get("ifindex") + " ");
try {
    // Use a ProcessBuilder to run an external command.
    //var pb = new ProcessBuilder("ls", "-lrt", "/"); // Linux
    var pb = new ProcessBuilder("cmd.exe", "/C", "dir"); // Windows
    output.append("Command Run");
    var p = pb.start();
    var is = p.getInputStream();
    var br = new BufferedReader(new InputStreamReader(is));
    var line = null;
    while ((line = br.readLine()) != null) {
        output.append(line + "");
    }
    var r = p.waitFor(); // Let the process finish.
    if (r == 0) { // No error
        // run cmd2.
    }
    output.append("All Done");
} catch (e) {
    output.append(e.toString()); // printStackTrace() returns void; append the message instead
}
log.warn("end");
In this tab you can:
The new GigaStor will appear in the list of Existing GigaStors.
Choose how you want to use the Trace Extraction functionality from the options under Allow Trace Extraction:
For more information on Trace Extraction, refer to Extract a GigaStor trace file.
View and configure the GigaStors you have added:
In this tab you can perform the following actions:
The Import/Export feature allows you to easily import or export the following GigaFlow system settings and configuration:
A typical use case for this functionality is exporting the settings configured on an existing GigaFlow server to import them on a newly deployed GigaFlow server.
Since defining Profiles and Flow Objects can be time intensive, the Import/Export feature provides an error-free mechanism to replicate the definitions.
To import the GigaFlow configuration and system settings, perform the following steps:
The export file is located in the configsettings folder, under your GigaFlow installation folder.
Note: The import action replaces the previous GigaFlow configuration and system settings.
Note: If you open the .json file, make sure not to delete the "Metadata" section. This section contains the version of the GigaFlow instance from which the export was done.
Located at System > Licenses.
Here you can view all the licences associated with GigaFlow.
The GigaFlow License table lists:
The license is also viewable as a JSON object at the bottom of the page.
All 3rd party licenses are listed in the next table. These are:
All communications between your installation of GigaFlow and our servers are carried out in cleartext over HTTP; you have complete visibility. You can view call home data at System > Licenses.
A typical response to a call home might look like the following. You can view call home response data at System > Licenses.
You can view all call home errors at System > Licenses.
JSON object version of GigaFlow license.
Located at System > Receivers.
Here you can view and edit GigaFlow's defined Netflow receivers.
It is important to ensure that the installation is ready to listen. Senders on your network will be configured to send to the GigaFlow server address and to a defined port; these must match the receivers.
GigaFlow is built to receive and process flow records and session-based syslogs. GigaFlow can also process syslog messages relating to specific user or IP authentication details.
Flow records are processed automatically into the flow databases. The records are also checked against blacklisted IPs and against defined Profilers. See System > Blacklists and Profiling.
The syslog messages are sent to the syslog processor for parsing.
Here you can see the existing listener ports available and if they are receiving flows or syslogs.
You can select the number of ports to show from the dropdown menu above the table, i.e. 10, 25, 50, 100 or all. The default is 50 items. The total number of ports is displayed at the top of the table. The information displayed includes:
To add a new port, click the + icon at the top of the page. Then:
To edit the number of Netflow processing threads, enter the new number and click Save.
The Port Threads table displays summary information about the port threads. This includes:
Infrastructure visibility is achieved using session-based Syslog. Located at System > Syslog Parsers.
System log parsing rules can be viewed and edited here.
All existing syslog parser patterns are listed here. Information displayed includes:
Default system Syslog parser patterns are listed here. Information displayed includes:
Click + to add the corresponding Syslog parser from the drop-down menu.
Network Address Translation is a method of remapping one IP address space into another by modifying the network address and port information in the IP header of packets while they are in transit across a traffic routing device.
If available in a NetFlow record, then GigaFlow will automatically ingest NAT fields. To allow the ingestion of Syslog flow records which contain session or NAT fields, enable a syslog receiver in GigaFlow using the System > Receivers dialog to receive those messages. Then, to configure the required Syslog Parsers, perform the following steps:
A related new entry is added to the All Patterns table for the selected syslog message.
The Edit Pattern dialog shows.
Note: If the pattern is correct, then the Matches table is populated with the resulting number of entries.
GigaFlow creates a unique Session ID for all NAT records; users will not typically need to use this field directly. When a flow or a syslog record indicates a NAT translation, each forensic flow that is created receives a unique ID that links them together.
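The Session ID linkage can be pictured with a small sketch. This is illustrative only: the flow dictionaries, field names, and counter are hypothetical, not GigaFlow's internal representation:

```python
import itertools

_session_ids = itertools.count(1)  # stand-in for GigaFlow's unique ID source

def link_nat_flows(inside_flow, outside_flow):
    """Give the pre- and post-translation forensic flows the same Session ID
    so they can be joined later."""
    sid = next(_session_ids)
    return [dict(inside_flow, session_id=sid),
            dict(outside_flow, session_id=sid)]

pair = link_nat_flows({"srcadd": "10.0.0.5", "srcport": 51000},
                      {"srcadd": "203.0.113.9", "srcport": 40001})
```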
Note: If the syslog contains a session ID, then also select the Session ID field.
Note: When you add a pattern in the Default Patterns table, you enable it and it is then used to match syslog messages as they are ingested in real time.
Once flows are ingested using the pattern, you will find the NAT device(s) in the Configuration > Infrastructure Devices dialog. Here you can see all the devices that GigaFlow is receiving NAT data for. If the Nat column is marked as true, then the related device is available in the NAT Reporting dialog.
Syslog exceptions are listed here. To remove the current entries and repopulate the table with new exceptions, click the Remove all exceptions icon in the upper-right corner of the Exceptions frame. Depending on the rate of the syslog messages, you may have to wait for the new syslog messages (that do not match a pattern) to be ingested.
The displayed information includes:
Located at System > System Health. Here you can find timelines of:
Located at System > System Status.
This page displays memory performance. The start time and uptime are displayed prominently at the top of the page.
The main table lists:
The Stats section details:
On this page you find the list of servers that GigaFlow discovered from previous VPC flow records. This list is used to identify servers when there are no other methods to determine the server side of the conversation (either source or destination).
On this page you can find all the statistics for the SNMP requests sent by all your SNMP devices. The columns of this table have the following descriptions:
This page displays the performance of database connections. The database start time and uptime are displayed prominently at the top of the page.
The DB connections table lists the most recent database connections. See PostgreSQL documentation for more.
Select how many entries to view from the drop-down menu, i.e. 10, 50, 100 or All. The default selection is 50.
The table lists:
You can view any PostgreSQL database deadlocks to help with troubleshooting performance problems. See PostgreSQL documentation for more. The table lists:
This page displays named table performance. The start time and uptime are displayed prominently at the top of the page. See PostgreSQL documentation for more.
Select how many entries to view from the drop-down menu, i.e. 10, 50, 100 or All. The default selection is 10. You can also search for specific tables.
This page displays prepared table performance. The start time and uptime are displayed prominently at the top of the page. See PostgreSQL documentation for more.
Select how many entries to view from the drop-down menu, i.e. 10, 50, 100 or All. The default selection is 10. You can also search for specific tables.
Located at System > Users.
Here you can view and edit information for all GigaFlow users and user groups. There are three categories of user on GigaFlow:
Select how many entries to view from the drop-down menu, i.e. 10, 50, 100 or All. The default selection is 10. You can also search for specific usernames or IDs using the search box.
All users are listed here. Information displayed includes:
To add a new local user:
To add a new user group:
This table lists all existing user groups along with associated abilities.
For each user group:
Information displayed includes:
Enter the domain name and ID of the user you want to allow to log into the system. They will be added as normal users.
To add a new LDAP user:
To add a new LDAP user group:
Information displayed includes:
Information displayed includes:
To add a new portal user:
This table lists all of the currently configured Data Access Groups.
For each Data Access Group:
This function allows you to automatically apply a filter to the data a user requests. These filters are based on Sites, and a user can have access to many Sites using this feature.
To add a new Data Access Group:
Once added, you can edit the group to control which Sites are assigned to it. In a user's own settings, you can then assign them to any available Data Access Group.
Located at System > Watchlists.
GigaFlow uses a blacklist compiled from multiple online sources. These lists are retrieved every hour and merged into a list of about 30,000 potentially dangerous IP addresses. Your GigaFlow system checks this hourly-updated list every five minutes and updates the local list when necessary.
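The hourly merge and five-minute local update described above can be sketched as follows. The source data and helper names here are made up for illustration; GigaFlow's own merge logic is internal:

```python
def merge_blacklists(sources):
    """Union each source's IP set into one deduplicated blacklist."""
    merged = set()
    for ips in sources.values():
        merged.update(ips)
    return merged

def local_list_delta(local, merged):
    """Run every five minutes: report which IPs to add and which to drop."""
    return merged - local, local - merged

merged = merge_blacklists({
    "blocklist.de": {"198.51.100.7", "203.0.113.20"},
    "other-feed": {"203.0.113.20", "192.0.2.99"},
})
to_add, to_drop = local_list_delta({"198.51.100.7"}, merged)
```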
We are reviewing our code watchlist terminology and will update this documentation to reflect any changes.
This table lists the available online blacklists. Information displayed includes:
To add a new blacklist source:
To refresh blacklists automatically, select Yes.
GigaFlow can alert on flow entries that match known bad IP addresses, scanning or outside profiles. Whitelisting provides the facility to tell the checking mechanism in GigaFlow that particular IPs or subnets should not raise an exception on any of the defined conditions. When defining a whitelist you should specify a reason for excluding a host or hosts. If you set the IP address to that of the infrastructure device and the mask to zero, GigaFlow will whitelist all traffic for the device.
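The whitelist check described above, including the mask-zero rule, might look like the following sketch. The entry structure (address, mask bits) is an assumption for illustration:

```python
import ipaddress

def is_whitelisted(ip, entries):
    """Check an IP against whitelist entries of (address, mask_bits).

    A mask of zero matches every address, mirroring the rule above that
    an infrastructure device IP with mask zero whitelists all its traffic.
    """
    addr = ipaddress.ip_address(ip)
    for base, mask in entries:
        net = ipaddress.ip_network(f"{base}/{mask}", strict=False)
        if addr in net:
            return True
    return False

# Mask zero whitelists everything; a /24 only matches its own subnet.
print(is_whitelisted("8.8.8.8", [("10.0.0.1", 0)]))  # True
```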
The whitelist table shows a list of the current whitelisted items, including the following information:
You can select the number of items to show from the dropdown menu above the table; the default is 50 items.
To add a new whitelist entry:
The locally defined blacklist table shows a list of the existing locally-defined blacklists, including the following information:
To add a new local blacklist:
Adding a noproxy=true flag to a watchlist URL allows the system to bypass proxy servers and access the list directly, i.e.
{URL to file}?NOPROXY=true
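Detecting and stripping that flag before fetching could look like this Python sketch (illustrative only; GigaFlow's own URL handling is internal and the helper name is hypothetical):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def split_noproxy(url):
    """Return (clean_url, bypass_proxy) for a watchlist URL.

    The NOPROXY=true query flag (case-insensitive) requests a direct
    fetch; it is removed from the URL actually retrieved.
    """
    parts = urlsplit(url)
    qs = parse_qsl(parts.query, keep_blank_values=True)
    bypass = any(k.lower() == "noproxy" and v.lower() == "true" for k, v in qs)
    kept = [(k, v) for k, v in qs if k.lower() != "noproxy"]
    clean = urlunsplit(parts._replace(query=urlencode(kept)))
    return clean, bypass

split_noproxy("http://lists.blocklist.de/lists/bots.txt?NOPROXY=true")
# → ("http://lists.blocklist.de/lists/bots.txt", True)
```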
See Configuration > Reporting for instructions on how to configure and create new report types.
The different types of pre-loaded report are listed here. Refer to the Direct Filtering Syntax table above. Queries are performed on the PostgreSQL database using SQL.
The Application Flows report is commonly used.
When this report is run for an infrastructure device on your network, GigaFlow returns the following information for each application flow associated with the device:
The different report types are:
select srcadd as srcadd,dstadd as dstadd,appid as appid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,dstadd,appid ORDERBY LIMITROW
bits_total
select FIRSTSEEN as afirstseen,srcadd as srcadd,dstadd as dstadd,appid as appid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,dstadd,appid order by srcadd,dstadd,appid,afirstseen
afirstseen
bits_avgsec
srcadd__dstadd__appid
Table Query: select srcadd as srcadd,dstadd as dstadd,appid as appid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,dstadd,appid ORDERBY LIMITROW
Table Value Field: bits_total
Graph Query: select FIRSTSEEN as afirstseen,srcadd as srcadd,dstadd as dstadd,appid as appid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,dstadd,appid order by srcadd,dstadd,appid,afirstseen
Graph Time Field: afirstseen
Graph Value Field: bits_avgsec
Graph Key Field(s) separated by __: srcadd__dstadd__appid
Table Query: select srcas as srcas, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcas ORDERBY LIMITROW
Table Value Field: bits_total
Graph Query: select FIRSTSEEN as afirstseen,srcas as srcas, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcas order by srcas,afirstseen
Graph Time Field: afirstseen
Graph Value Field: bits_avgsec
Graph Key Field(s) separated by __: srcas
Table Query: select srcas as srcas,dstas as dstas, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcas,dstas ORDERBY LIMITROW
Table Value Field: bits_total
Graph Query: select FIRSTSEEN as afirstseen,srcas as srcas,dstas as dstas, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcas,dstas order by srcas,dstas,afirstseen
Graph Time Field: afirstseen
Graph Value Field: bits_avgsec
Graph Key Field(s) separated by __: srcas__dstas
Table Query: select srcadd as srcadd,dstadd as dstadd, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,dstadd ORDERBY LIMITROW
Table Value Field: bits_total
Graph Query: select FIRSTSEEN as afirstseen,srcadd as srcadd,dstadd as dstadd, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,dstadd order by srcadd,dstadd,afirstseen
Graph Time Field: afirstseen
Graph Value Field: bits_avgsec
Graph Key Field(s) separated by __: srcadd__dstadd
Table Query: select srcadd,dstport,count(distinct(dstadd)) as dstcount from netflow WHERECLAUSE group by srcadd,dstport ORDERBY LIMITROW
Table Value Field: dstcount
Graph Query: select FIRSTSEEN as afirstseen, srcadd,dstport,cast((count(distinct(dstadd))) as bigint) as dstcount from netflow WHERECLAUSE group by afirstseen,srcadd,dstport order by srcadd,dstport,dstcount,afirstseen asc
Graph Time Field: afirstseen
Graph Value Field: dstcount
Graph Key Field(s) separated by __: srcadd__dstport
Table Query: select dstadd as dstadd, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by dstadd ORDERBY LIMITROW
Table Value Field: bits_total
Graph Query: select FIRSTSEEN as afirstseen,dstadd as dstadd, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstadd order by dstadd,afirstseen
Graph Time Field: afirstseen
Graph Value Field: bits_avgsec
Graph Key Field(s) separated by __: dstadd
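The bits_avgsec expression that recurs in the graph queries, cast(sum((bytes)*8)/(MODER/1000) as bigint), converts a per-bucket byte total into an average bit rate. A short sketch of that arithmetic, assuming MODER is the graph's time-bucket width in milliseconds:

```python
# Sketch: the bits_avgsec calculation used throughout the graph queries.
# MODER is assumed to be the graph bucket width in milliseconds.

def bits_avgsec(total_bytes: int, bucket_ms: int) -> int:
    # sum(bytes)*8 converts bytes to bits; dividing by the bucket width
    # in seconds (bucket_ms / 1000) yields average bits per second.
    return int(total_bytes * 8 / (bucket_ms / 1000))

# 15 MB observed in a 60-second bucket -> 2,000,000 bits/s
print(bits_avgsec(15_000_000, 60_000))
```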
Table Query:
select srcadd as srcadd, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd as srcadd, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd order by srcadd,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcadd
Table Query:
select firstseen,duration,device,customerid as tgsrc,engineid as tgdst,userid,userdomain,srcadd,dstadd,srcport,dstport,appid,postureid,nexthop,srcmac,dstmac,device||'_'||inif as difin,device||'_'||outif as difout,pkts,bytes*8 as bits,flags,proto,tos,srcas,dstas,spare as fpr,url,fwextcode,fwevent from netflow WHERECLAUSE ORDERBY LIMITROW
Table Value Field:
firstseen
Graph Query:
select FIRSTSEEN as afirstseen, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen order by afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
Table Query:
select firstseen,duration,cast( spare as integer) as fpr,device,customerid as tgsrc,engineid as tgdst,srcadd,dstadd,srcport,dstport,appid,nexthop,srcmac,dstmac,inif,outif,pkts,bytes*8 as bits,flags,proto,tos,srcas,dstas from netflow WHERECLAUSE and spare>0 ORDERBY LIMITROW
Table Value Field:
firstseen
Graph Query:
select FIRSTSEEN as afirstseen,cast( avg(spare) as integer) as maxfpr from netflow WHERECLAUSE and spare >0 group by afirstseen order by afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
maxfpr
Graph Key Field(s) separated by __:
Table Query:
select firstseen,duration,cast( spare as integer) as fpr,device,customerid as tgsrc,engineid as tgdst,srcadd,dstadd,srcport,dstport,appid,nexthop,srcmac,dstmac,inif,outif,pkts,bytes*8 as bits,flags,proto,tos,srcas,dstas from netflow WHERECLAUSE and spare>0 ORDERBY LIMITROW
Table Value Field:
firstseen
Graph Query:
select FIRSTSEEN as afirstseen,cast( max(spare) as integer) as maxfpr from netflow WHERECLAUSE and spare >0 group by afirstseen order by afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
maxfpr
Graph Key Field(s) separated by __:
Table Query:
select srcadd as srcadd,dstadd as dstadd,appid as appid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,dstadd,appid ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd as srcadd,dstadd as dstadd,appid as appid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,dstadd,appid order by srcadd,dstadd,appid,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcadd__dstadd__appid
Table Query:
select srcadd as srcadd,dstadd as dstadd,appid as appid,userid as userid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,dstadd,appid,userid ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd as srcadd,dstadd as dstadd,appid as appid,userid as userid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,dstadd,appid,userid order by srcadd,dstadd,appid,userid,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __: srcadd__dstadd__appid__userid
Table Query:
select appid ,count(*) as flowcount, count(distinct(srcadd)) as srccount, count(distinct(dstadd)) as dstcount, cast(sum(bytes*8)as bigint) as bits_total from netflow WHERECLAUSE group by appid ORDERBY LIMITROW
Table Value Field:
flowcount
Graph Query:
select FIRSTSEEN as afirstseen,appid ,count(*) as flowcount from netflow WHERECLAUSE group by afirstseen,appid order by flowcount,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
flowcount
Graph Key Field(s) separated by __:
appid
Table Query:
select tos as tos, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by tos ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,tos as tos, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,tos order by tos,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
tos
Table Query:
select firstseen,duration,cast( spare as integer) as fpr,device,customerid as tgsrc,engineid as tgdst,srcadd,dstadd,srcport,dstport,appid,nexthop,srcmac,dstmac,inif,outif,pkts,bytes*8 as bits,flags,proto,tos,srcas,dstas from netflow WHERECLAUSE ORDERBY LIMITROW
Table Value Field:
duration
Graph Query:
select FIRSTSEEN as afirstseen,cast( avg(duration) as bigint) as avgduration from netflow WHERECLAUSE group by afirstseen order by afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
avgduration
Graph Key Field(s) separated by __:
Table Query:
select firstseen,duration,cast( spare as integer) as fpr,device,customerid as tgsrc,engineid as tgdst,srcadd,dstadd,srcport,dstport,appid,nexthop,srcmac,dstmac,inif,outif,pkts,bytes*8 as bits,flags,proto,tos,srcas,dstas from netflow WHERECLAUSE ORDERBY LIMITROW
Table Value Field:
duration
Graph Query:
select FIRSTSEEN as afirstseen,cast( max(duration) as bigint) as maxduration from netflow WHERECLAUSE group by afirstseen order by afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
maxduration
Graph Key Field(s) separated by __:
Table Query:
select fwevent as fwevent, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by fwevent ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,fwevent as fwevent, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,fwevent order by fwevent,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
fwevent
Table Query:
select fwextcode as fwextcode, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by fwextcode ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,fwextcode as fwextcode, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,fwextcode order by fwextcode,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
fwextcode
Table Query:
select device as device,device||'_'||inif as difin,device||'_'||outif as difout, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by device,difin,difout ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,device as device,device||'_'||inif as difin,device||'_'||outif as difout, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,device,difin,difout order by device,difin,difout,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
device__difin__difout
Table Query:
select device as device,device||'_'||outif as difout, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by device,difout ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,device as device,device||'_'||outif as difout, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,device,difout order by device,difout,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
device__difout
Table Query:
select netflow.device||'_'||outif as difout ,cast(sum((bytes)*8)/(REPORTPERIOD/1000) as bigint) as pct_avg_out from netflow WHERECLAUSE group by difout ORDERBY LIMITROW
Table Value Field:
pct_avg_out
Graph Query:
select FIRSTSEEN as afirstseen,device||'_'||outif as difout ,cast(sum((bytes)*8)/(MODER/1000) as bigint) as pct_avg_out from netflow WHERECLAUSE group by afirstseen,difout order by pct_avg_out,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
pct_avg_out
Graph Key Field(s) separated by __:
difout
Table Query:
select device as device,device||'_'||inif as difin, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by device,difin ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,device as device,device||'_'||inif as difin, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,device,difin order by device,difin,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
device__difin
Table Query:
select device||'_'||inif as difin ,cast(sum((bytes)*8)/(REPORTPERIOD/1000) as bigint) as pct_avg_in from netflow WHERECLAUSE group by difin ORDERBY LIMITROW
Table Value Field:
pct_avg_in
Graph Query:
select FIRSTSEEN as afirstseen,device||'_'||inif as difin ,cast(sum((bytes)*8)/(MODER/1000) as bigint) as pct_avg_in from netflow WHERECLAUSE group by afirstseen,difin order by pct_avg_in,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
pct_avg_in
Graph Key Field(s) separated by __:
difin
Table Query:
select srcmac as srcmac,dstmac as dstmac, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcmac,dstmac ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcmac as srcmac,dstmac as dstmac, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcmac,dstmac order by srcmac,dstmac,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcmac__dstmac
Table Query:
select dstmac as dstmac, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by dstmac ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,dstmac as dstmac, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstmac order by dstmac,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
dstmac
Table Query:
select srcmac as srcmac, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcmac ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcmac as srcmac, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcmac order by srcmac,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcmac
Table Query:
select dstport,count(distinct(dstadd)) as dstcount from netflow WHERECLAUSE group by dstport ORDERBY LIMITROW
Table Value Field:
dstcount
Graph Query:
select FIRSTSEEN as afirstseen, dstport,cast((count(distinct(dstadd))) as bigint) as dstcount from netflow WHERECLAUSE group by afirstseen,dstport order by dstport,dstcount,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
dstcount
Graph Key Field(s) separated by __:
dstport
Table Query:
select dstport,count(distinct(srcadd)) as srccount from netflow WHERECLAUSE group by dstport ORDERBY LIMITROW
Table Value Field:
srccount
Graph Query:
select FIRSTSEEN as afirstseen, dstport,cast((count(distinct(srcadd))) as bigint) as srccount from netflow WHERECLAUSE group by afirstseen,dstport order by dstport,srccount,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
srccount
Graph Key Field(s) separated by __:
dstport
Table Query:
select srcport,count(distinct(dstadd)) as dstcount from netflow WHERECLAUSE group by srcport ORDERBY LIMITROW
Table Value Field:
dstcount
Graph Query:
select FIRSTSEEN as afirstseen, srcport,cast((count(distinct(dstadd))) as bigint) as dstcount from netflow WHERECLAUSE group by afirstseen,srcport order by srcport,dstcount,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
dstcount
Graph Key Field(s) separated by __:
srcport
Table Query:
select srcport,count(distinct(srcadd)) as srccount from netflow WHERECLAUSE group by srcport ORDERBY LIMITROW
Table Value Field:
srccount
Graph Query:
select FIRSTSEEN as afirstseen, srcport,cast((count(distinct(srcadd))) as bigint) as srccount from netflow WHERECLAUSE group by afirstseen,srcport order by srcport,srccount,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
srccount
Graph Key Field(s) separated by __:
srcport
Table Query:
select dstport as dstport, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by dstport ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,dstport as dstport, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstport order by dstport,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
dstport
Table Query:
select srcport as srcport, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcport ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcport as srcport, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcport order by srcport,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcport
Table Query:
select postureid as postureid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by postureid ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,postureid as postureid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,postureid order by postureid,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
postureid
Table Query:
select proto as proto, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by proto ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,proto as proto, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,proto order by proto,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
proto
Table Query:
select dstadd ,dstport,srccount,localports,remoteports,sessions,(bytes*8) as bits from (select dstadd as dstadd,dstport,count(distinct(srcadd)) as srccount,count(distinct(dstport)) as localports,count(distinct(srcport)) as remoteports,count(*) as sessions,cast(sum((bytes)) as bigint) as bytes from netflow WHERECLAUSE group by dstadd,dstport ) as a where a.sessions>5 and a.localports<5 and a.srccount>5 group by dstadd,dstport,srccount,sessions,localports,remoteports,bytes ORDERBY LIMITROW
Table Value Field:
srccount
Graph Query:
select afirstseen ,dstadd ,dstport,srccount as srcsavgsec,localports,bits_avgsec from (select FIRSTSEEN as afirstseen,dstadd as dstadd ,dstport,count(distinct(srcadd)) as srccount,count(distinct(dstport)) as localports,count(distinct(srcport)) as remoteports,count(*) as records,cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstadd ,dstport) as a where a.records>5 and a.localports<3 and a.srccount>5 group by afirstseen,dstadd ,dstport,srccount,localports,bits_avgsec order by dstadd,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
srcsavgsec
Graph Key Field(s) separated by __:
dstadd__dstport
Table Query:
select dstadd ,srccount,localports,remoteports,sessions,(bytes*8) as bits from (select dstadd as dstadd,count(distinct(srcadd)) as srccount,count(distinct(dstport)) as localports,count(distinct(srcport)) as remoteports,count(*) as sessions,cast(sum((bytes)) as bigint) as bytes from netflow WHERECLAUSE group by dstadd ) as a where a.sessions>5 and a.localports<5 and a.srccount>5 group by dstadd,srccount,sessions,localports,remoteports,bytes ORDERBY LIMITROW
Table Value Field:
srccount
Graph Query:
select afirstseen ,dstadd ,srccount as srcsavgsec,localports,bits_avgsec from (select FIRSTSEEN as afirstseen,dstadd as dstadd,count(distinct(srcadd)) as srccount,count(distinct(dstport)) as localports,count(distinct(srcport)) as remoteports,count(*) as records,cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstadd) as a where a.records>5 and a.localports<5 and a.srccount>5 group by afirstseen,dstadd,srccount,localports,bits_avgsec order by dstadd,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
dstadd
Table Query:
select srcadd ,dstcount,localports,remoteports,sessions,(bytes*8) as bits from (select srcadd as srcadd,count(distinct(dstadd)) as dstcount,count(distinct(srcport)) as localports,count(distinct(dstport)) as remoteports,count(*) as sessions,cast(sum((bytes)) as bigint) as bytes from netflow WHERECLAUSE group by srcadd ) as a where a.sessions>5 and a.localports<5 and a.dstcount>5 group by srcadd,dstcount,sessions,localports,remoteports,bytes ORDERBY LIMITROW
Table Value Field:
dstcount
Graph Query:
select afirstseen ,srcadd ,dstcount as dstsavgsec,localports,bits_avgsec from (select FIRSTSEEN as afirstseen,srcadd as srcadd,count(distinct(dstadd)) as dstcount,count(distinct(srcport)) as localports,count(distinct(dstport)) as remoteports,count(*) as records,cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd) as a where a.records>5 and a.localports<5 and a.dstcount>5 group by afirstseen,srcadd,dstcount,localports,bits_avgsec order by srcadd,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcadd
Table Query:
select srcadd ,srcport,dstcount,localports,remoteports,sessions,(bytes*8) as bits from (select srcadd as srcadd,srcport,count(distinct(dstadd)) as dstcount,count(distinct(srcport)) as localports,count(distinct(dstport)) as remoteports,count(*) as sessions,cast(sum((bytes)) as bigint) as bytes from netflow WHERECLAUSE group by srcadd,srcport ) as a where a.sessions>5 and a.localports<5 and a.dstcount>5 group by srcadd,srcport,dstcount,sessions,localports,remoteports,bytes ORDERBY LIMITROW
Table Value Field:
dstcount
Graph Query:
select afirstseen ,srcadd ,srcport,dstcount as dstsavgsec,localports,bits_avgsec from (select FIRSTSEEN as afirstseen,srcadd as srcadd ,srcport,count(distinct(dstadd)) as dstcount,count(distinct(srcport)) as localports,count(distinct(dstport)) as remoteports,count(*) as records,cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd ,srcport) as a where a.records>5 and a.localports<3 and a.dstcount>5 group by afirstseen,srcadd ,srcport,dstcount,localports,bits_avgsec order by srcadd,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
dstsavgsec
Graph Key Field(s) separated by __:
srcadd__srcport
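The four nested queries above encode a simple behavioural heuristic: a host is flagged when it exchanges traffic with many distinct peers (more than 5), concentrates that traffic on few local ports (fewer than 5, or 3 in the per-port variants), and accounts for more than 5 sessions. A sketch of the same thresholds applied to in-memory flow records (the record layout is an assumption; the SQL operates on the netflow table directly):

```python
# Sketch: the detection heuristic from the nested queries above, applied
# to a list of flow dicts. Thresholds mirror the SQL: sessions > 5,
# distinct local ports < 5, distinct peers > 5.
from collections import defaultdict

def suspicious_servers(flows, min_sessions=5, max_local_ports=5, min_peers=5):
    """Flag destination addresses serving many distinct clients on few ports."""
    stats = defaultdict(lambda: {"peers": set(), "local": set(), "sessions": 0})
    for f in flows:
        s = stats[f["dstadd"]]
        s["peers"].add(f["srcadd"])
        s["local"].add(f["dstport"])
        s["sessions"] += 1
    return [addr for addr, s in stats.items()
            if s["sessions"] > min_sessions
            and len(s["local"]) < max_local_ports
            and len(s["peers"]) > min_peers]

# Ten distinct clients all hitting one host on port 445 trips the rule.
flows = [{"dstadd": "10.0.0.5", "srcadd": f"10.0.1.{i}", "dstport": 445}
         for i in range(10)]
print(suspicious_servers(flows))  # ['10.0.0.5']
```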
Table Query:
select srcadd as srcadd,srcport as srcport,dstadd as dstadd,dstport as dstport,appid as appid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,srcport,dstadd,dstport,appid ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd as srcadd,srcport as srcport,dstadd as dstadd,dstport as dstport,appid as appid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,srcport,dstadd,dstport,appid order by srcadd,srcport,dstadd,dstport,appid,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcadd__srcport__dstadd__dstport__appid
Table Query:
select firstseen as firstseen,srcadd as srcadd,srcport as srcport,dstadd as dstadd,dstport as dstport,appid as appid,proto as proto, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by firstseen,srcadd,srcport,dstadd,dstport,appid,proto ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,firstseen as firstseen,srcadd as srcadd,srcport as srcport,dstadd as dstadd,dstport as dstport,appid as appid,proto as proto, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,firstseen,srcadd,srcport,dstadd,dstport,appid,proto order by firstseen,srcadd,srcport,dstadd,dstport,appid,proto,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
firstseen__srcadd__srcport__dstadd__dstport__appid__proto
Table Query:
select srcadd as srcadd,srcport as srcport,inif as inif,dstadd as dstadd,dstport as dstport,outif as outif,appid as appid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcadd,srcport,inif,dstadd,dstport,outif,appid ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd as srcadd,srcport as srcport,inif as inif,dstadd as dstadd,dstport as dstport,outif as outif,appid as appid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcadd,srcport,inif,dstadd,dstport,outif,appid order by srcadd,srcport,inif,dstadd,dstport,outif,appid,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcadd__srcport__inif__dstadd__dstport__outif__appid
Table Query:
select dstadd-modulus(dstadd,16777216) as dstsubneta, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by dstsubneta ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,dstadd-modulus(dstadd,16777216) as dstsubneta, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstsubneta order by dstsubneta,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
dstsubneta
Table Query:
select srcadd-modulus(srcadd,16777216) as srcsubneta, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcsubneta ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd-modulus(srcadd,16777216) as srcsubneta, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcsubneta order by srcsubneta,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcsubneta
Table Query:
select dstadd-modulus(dstadd,65536) as dstsubnetb, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by dstsubnetb ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,dstadd-modulus(dstadd,65536) as dstsubnetb, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstsubnetb order by dstsubnetb,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
dstsubnetb
Table Query:
select srcadd-modulus(srcadd,65536) as srcsubnetb, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcsubnetb ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd-modulus(srcadd,65536) as srcsubnetb, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcsubnetb order by srcsubnetb,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcsubnetb
Subnet Class C By Dest
Table Query:
select dstadd-modulus(dstadd,256) as dstsubnetc, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by dstsubnetc ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,dstadd-modulus(dstadd,256) as dstsubnetc, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,dstsubnetc order by dstsubnetc,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
dstsubnetc
Subnet Class C By Source
Table Query:
select srcadd-modulus(srcadd,256) as srcsubnetc, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by srcsubnetc ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,srcadd-modulus(srcadd,256) as srcsubnetc, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,srcsubnetc order by srcsubnetc,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
srcsubnetc
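The subnet reports rely on addresses being stored as 32-bit integers: subtracting modulus(addr, 256) zeroes the last octet, yielding the /24 (Class C) network, while moduli of 65536 and 16777216 yield the /16 and /8 networks used in the Class B and Class A reports above. A small sketch of the arithmetic:

```python
# Sketch: the addr - modulus(addr, N) expression used by the subnet
# reports, assuming flow addresses are stored as unsigned 32-bit ints.
import ipaddress

def subnet_base(addr: str, modulus: int) -> str:
    """Zero the low octets of an IPv4 address via integer arithmetic."""
    n = int(ipaddress.IPv4Address(addr))
    return str(ipaddress.IPv4Address(n - n % modulus))

print(subnet_base("192.168.37.201", 256))       # 192.168.37.0  (/24)
print(subnet_base("192.168.37.201", 65536))     # 192.168.0.0   (/16)
print(subnet_base("192.168.37.201", 16777216))  # 192.0.0.0     (/8)
```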
Table Query:
select dstadd-modulus(dstadd,256) as dstsubnetc,count(distinct(dstadd)) as dstcount from netflow WHERECLAUSE group by dstsubnetc ORDERBY LIMITROW
Table Value Field:
dstcount
Graph Query:
select FIRSTSEEN as afirstseen, dstadd-modulus(dstadd,256) as dstsubnetc,cast((count(distinct(dstadd))) as bigint) as dstcount from netflow WHERECLAUSE group by afirstseen,dstsubnetc order by dstcount,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
dstcount
Graph Key Field(s) separated by __:
dstsubnetc
Table Query:
select srcadd-modulus(srcadd,256) as srcsubnetc,count(distinct(srcadd)) as srccount from netflow WHERECLAUSE group by srcsubnetc ORDERBY LIMITROW
Table Value Field:
srccount
Graph Query:
select FIRSTSEEN as afirstseen, srcadd-modulus(srcadd,256) as srcsubnetc,cast((count(distinct(srcadd))) as bigint) as srccount from netflow WHERECLAUSE group by afirstseen,srcsubnetc order by srccount,afirstseen asc
Graph Time Field:
afirstseen
Graph Value Field:
srccount
Graph Key Field(s) separated by __:
srcsubnetc
Table Query:
select flags as flags, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by flags ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,flags as flags, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,flags order by flags,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
flags
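The flags column grouped on above is the cumulative OR of the TCP flags seen on a flow, using the standard TCP header bit values (FIN=1, SYN=2, RST=4, PSH=8, ACK=16, URG=32). A sketch decoding a stored value back into flag names:

```python
# Sketch: decoding the cumulative TCP flags value grouped on in the
# report above. Bit positions are the standard TCP header assignments.
TCP_FLAGS = [(1, "FIN"), (2, "SYN"), (4, "RST"),
             (8, "PSH"), (16, "ACK"), (32, "URG")]

def decode_flags(value: int) -> str:
    names = [name for bit, name in TCP_FLAGS if value & bit]
    return "|".join(names) if names else "none"

print(decode_flags(27))  # FIN|SYN|PSH|ACK - a complete TCP exchange
```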
Table Query:
select engineid as tgdst, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by tgdst ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,engineid as tgdst, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,tgdst order by tgdst,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
tgdst
Table Query:
select customerid as customerid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by customerid ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,customerid as customerid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,customerid order by customerid,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
customerid
Table Query:
select customerid as tgsrc,engineid as tgdst, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by tgsrc,tgdst ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,customerid as tgsrc,engineid as tgdst, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,tgsrc,tgdst order by tgsrc,tgdst,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
tgsrc__tgdst
Table Query:
select url as url, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by url ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,url as url, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,url order by url,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
url
Table Query:
select userid as userid, cast((sum(bytes)*8) as bigint) as bits_total from netflow WHERECLAUSE group by userid ORDERBY LIMITROW
Table Value Field:
bits_total
Graph Query:
select FIRSTSEEN as afirstseen,userid as userid, cast(sum((bytes)*8)/(MODER/1000) as bigint) as bits_avgsec from netflow WHERECLAUSE group by afirstseen,userid order by userid,afirstseen
Graph Time Field:
afirstseen
Graph Value Field:
bits_avgsec
Graph Key Field(s) separated by __:
userid