Knowledge Base

Questions

Nexpose: What is Vulnerability PCI Compliance Status?

Can anyone explain what determines the Pass/Fail status of the Vulnerability PCI Compliance Status? Specifically, I'm talking about the "Vulnerability PCI Compliance Status" data field available within a CSV report template.

The obvious answer is CVSS >= 4 = Fail, but that is not a complete answer. DoS vulns and vulnerabilities PCI deems an "automatic failure" can affect the field, and I'm also finding that an approved exception will turn a Fail into a Pass. What other factors alter this field? Is there any way to determine what caused the change? For example, if I generate a CSV export report of devices in a particular asset group, how can I demonstrate to an auditor or a QSA WHY a particular vulnerability is set to "Pass"?

Two examples of vulnerabilities that are inexplicably set to Pass:

- VMSA-2012-0018: Update to ESX glibc package (CVE-2012-3405), Vulnerability ID 12966
- VMSA-2012-0013: Update to ESX/ESXi userworld OpenSSL library (CVE-2011-4577), Vulnerability ID 13203

They have a Vulnerability Severity Level of 5 and 4, respectively, and a Vulnerability CVSS Score of 5 and 4.3, respectively. Neither has an exception, so I would expect both to be Fail. Yet both have Vulnerability PCI Compliance Status = Pass.

About the only thing I can find to justify the status is that the dim_vulnerability table has a pci_severity_score of 2 for both vulnerabilities. But I have no idea how pci_severity_score is calculated, or why it would be used instead of the Vulnerability Severity Level or CVSS Score.
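One way to gather evidence for an auditor is to query the reporting data model directly and put the generic severity fields next to the PCI-specific ones. The sketch below is a hedged example, not a confirmed answer: it assumes dim_vulnerability carries cvss_score and severity_score alongside the pci_severity_score column mentioned above, and pci_status is an assumed column name for the Pass/Fail value.

```sql
-- Hypothetical audit query against the reporting data model.
-- Assumptions: dim_vulnerability exposes severity_score, cvss_score,
-- pci_severity_score, and a pci_status column holding 'Pass'/'Fail'.
SELECT dv.vulnerability_id,
       dv.title,
       dv.severity_score,      -- Vulnerability Severity Level
       dv.cvss_score,          -- Vulnerability CVSS Score
       dv.pci_severity_score,  -- the field that seems to drive Pass/Fail
       dv.pci_status           -- assumed name for the PCI compliance status
FROM dim_vulnerability dv
WHERE dv.vulnerability_id IN (12966, 13203);
```

Running a query like this over a handful of known Pass and known Fail vulnerabilities should at least show whether the CSV field tracks pci_severity_score rather than the CVSS-derived fields.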

Posted by Thao Doan 2 years ago

Scheduled SQL Report Distribution changes overnight

Is anyone else seeing their scheduled SQL reports change to "no email source" after being set to a global email source? New SQL reports on the current release do not seem to have the issue, until a new content update or new release arrives (still undecided and untraceable). The working reports then become broken, and the only way to fix one is to rebuild the report from scratch (copies break too!), which, as you can expect, is not a viable option. I can duplicate this every day, yet there is no way to capture the change to the database setting that switches between email sources.

Steps to reproduce:

1. Find an older SQL report (pre-current release).
2. Change the query to:

```sql
SELECT DISTINCT ON (da.ip_address)
       da.ip_address,
       da.host_name,
       da.mac_address,
       dos.description AS operating_system,
       to_char(fas.scan_finished, 'MM/DD/YYYY HH24:MI:SS') AS scan_finished
FROM dim_asset da
JOIN dim_operating_system dos USING (operating_system_id)
JOIN dim_host_type dht USING (host_type_id)
JOIN fact_asset_scan fas USING (asset_id)
JOIN dim_tag_asset dta USING (asset_id)
JOIN dim_tag dt USING (tag_id)
WHERE scan_finished > NOW() - INTERVAL '30 days'
```

3. Set the data model to 2.3.0 (yet another issue).
4. Set the scope (currently 1 tag).
5. Set the frequency to run daily at 8:30 am.
6. Set the report owner as yourself; no other report viewers are necessary.
7. Set the email source to the global email source, send the report to the owner, and attach the report as a file.
8. Save and/or run the report.

Check back in the morning to find the report did run but didn't send, and the distribution settings have reverted back to "no email source".
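A side note on the query itself, unrelated to the distribution bug: in PostgreSQL, DISTINCT ON without a matching ORDER BY keeps an arbitrary row from each group, so the scan_finished shown per IP is not guaranteed to be the latest one. A small tweak (same tables and columns as above, just adding an ORDER BY that leads with the DISTINCT ON expression, as PostgreSQL requires) makes it deterministically keep the most recent scan per address:

```sql
-- Same report query, but ORDER BY makes DISTINCT ON keep the
-- newest fact_asset_scan row for each ip_address.
SELECT DISTINCT ON (da.ip_address)
       da.ip_address,
       da.host_name,
       da.mac_address,
       dos.description AS operating_system,
       to_char(fas.scan_finished, 'MM/DD/YYYY HH24:MI:SS') AS scan_finished
FROM dim_asset da
JOIN dim_operating_system dos USING (operating_system_id)
JOIN dim_host_type dht USING (host_type_id)
JOIN fact_asset_scan fas USING (asset_id)
JOIN dim_tag_asset dta USING (asset_id)
JOIN dim_tag dt USING (tag_id)
WHERE fas.scan_finished > NOW() - INTERVAL '30 days'
ORDER BY da.ip_address, fas.scan_finished DESC
```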

Posted by kbruce 2 years ago