Do you have any suggestions for how I would go about querying for log data that occurs during a specific range of hours? I'd still like the ability to change the timeframe using the standard tooling (i.e. Yesterday, Last 7 Days, etc.). This could be useful when logs during off-hours have 'noise' that can be ignored during analysis and alerting, or when we want to focus on things like peak usage times for multi-day comparison. If this is possible, it might be a good example to include on the LEQL page.
Posted by Ryan Peterson about a year ago
Running Nexpose/InsightVM in VMware Workstation Pro 14 on a Windows 10 host. Everything works: I am able to access the Security Console via 10.x.x.177:3780, and I am also able to access Metasploit Pro on https://localhost:3790. Once I connect to the corporate VPN, however, I lose connection to the Security Console. I have tried running the VM in Bridged as well as NAT mode. I have tried changing the IP of the VM (within Ubuntu) to match the /24 range and netmask of the corporate network as well as the laptop's. In certain cases I can ping my pentest machine in the corporate network, and in some cases I can ping from the VM to the laptop; in other cases ping doesn't go in or out. My last option is to set up an SSH tunnel from the VM to my pentest host within the corp network, then back out to my laptop. This is convoluted, but it may work. Could you suggest a VMware Workstation setup where I could possibly avoid this nonsense?
Posted by Scott Koff about a year ago
I've been trying to find where this might be configurable. On the Scan History page, it comes up with the scan name. It's interesting (and annoying) that when exporting, I don't get the same thing I see on screen. On a big site with many scans, not having the schedule name is less than helpful.
Posted by Russell Clements about a year ago
Hi there, I was wondering which ports need to be opened between a local scan engine and a target server in order to perform a vulnerability scan? (The target server is fairly locked down behind a local firewall, etc.; however, we need to perform a vulnerability scan on it.) Thank you.
Posted by Ali about a year ago
I am building an Archer integration through a Postgres instance of the reporting replica Nexpose database. The "Vulnerability PCI Compliance Status" column does not appear to be in the reporting database table structure. I assume it's a calculation I can replicate, but I do not know what it is.
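In case it helps: I couldn't find that column in the dimensional model either. Here is a rough sketch of only the base PCI DSS rule (CVSS >= 4.0 fails) — this is my approximation, not the console's actual calculation, it ignores automatic-failure vulns and approved exceptions, and the dim_vulnerability/cvss_score names are as I recall them from the dimensional model, so verify against your schema:

```sql
-- Approximate the CSV "Vulnerability PCI Compliance Status" field using
-- only the base PCI DSS rule: CVSS score of 4.0 or higher => Fail.
-- Does NOT account for automatic-failure vulns or approved exceptions.
SELECT dv.vulnerability_id,
       dv.title,
       dv.cvss_score,
       CASE WHEN dv.cvss_score >= 4.0 THEN 'Fail' ELSE 'Pass' END AS approx_pci_status
FROM dim_vulnerability dv;
```

Treat mismatches between this and the CSV output as pointers to the other inputs (DoS handling, exceptions, PCI-specific overrides) rather than a bug in the query.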
Posted by Thao Doan about a year ago
Can anyone explain what determines the Pass/Fail status of the Vulnerability PCI Compliance Status? Specifically, I'm talking about the "Vulnerability PCI Compliance Status" data field available within a CSV report template. The obvious answer is CVSS >= 4 = Fail, but that is not a complete answer. DoS vulns and vulnerabilities PCI deems "automatic failures" can affect the field. But I'm also finding that vulnerabilities with an approved exception will cause a Fail to turn into a Pass.

What other factors will alter this field? Is there any way to determine what caused the change in setting? For example, if I generate a CSV export report of devices in a particular asset group, how can I demonstrate to an auditor or a QSA WHY a particular vulnerability is set to "Pass"?

Two examples of vulnerabilities that are inexplicably set to Pass are VMSA-2012-0018: Update to ESX glibc package (CVE-2012-3405) (Vulnerability ID: 12966) and VMSA-2012-0013: Update to ESX/ESXi userworld OpenSSL library (CVE-2011-4577) (Vulnerability ID: 13203). They have a Vulnerability Severity Level of 5 and 4, respectively, and a Vulnerability CVSS Score of 5 and 4.3, respectively. Neither has an exception. I would think they would be Fail, yet they're both Vulnerability PCI Compliance Status = Pass.

About the only thing I can find to justify the status is that in the dim_vulnerability table there is a pci_severity_score of 2 for both vulnerabilities. But I have no idea how pci_severity_score is calculated, or why it is used instead of the Vulnerability Severity Level or CVSS Score.
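For anyone digging into the same question, a side-by-side pull of the candidate inputs for the two advisories above may help narrow it down (the title and cvss_score column names are from the dim_vulnerability table as I see it; adjust if your model differs):

```sql
-- Line up CVSS and pci_severity_score for the two ESX vulns that
-- unexpectedly report "Pass" (numeric IDs quoted from the post above).
SELECT vulnerability_id,
       title,
       cvss_score,
       pci_severity_score
FROM dim_vulnerability
WHERE vulnerability_id IN (12966, 13203);
```

If pci_severity_score really is the deciding input, comparing it against cvss_score across a larger sample should show exactly where the two disagree.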
Posted by Thao Doan about a year ago
We are refining metrics and trying to determine the lifespan of vulns on our network, and we'd like some benchmarks. Can you share your average time to detect vulnerabilities and your average remediation time? It's also helpful to know if you don't or can't get this information. We are using on-prem Nexpose (not InsightVM), so we are using SQL queries to get this information. Thanks, Leslie
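For reference, a minimal sketch of the sort of query we're using, assuming the fact_asset_vulnerability_age fact table exists in your reporting data model (table and column names are as I recall them, so double-check against your version):

```sql
-- Average and worst-case age, in days, of currently open findings.
SELECT AVG(age_in_days) AS avg_open_age_days,
       MAX(age_in_days) AS oldest_open_finding_days,
       COUNT(*)         AS open_findings
FROM fact_asset_vulnerability_age;
```

Splitting the same aggregates by site or severity usually makes the benchmark more comparable across environments.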
Posted by Leslie Castex about a year ago
I am new to writing custom vuln checks, but I am looking to write one that triggers off the output of a command run as an authenticated user on Linux hosts. End goal: detect some of the malicious Python packages out there.
Posted by Trevor Steen about a year ago
I recently downloaded and installed Rapid7 Nexpose. I am able to access the web page through 127.0.0.1:3780 but can't log in. While installing, it asked me to set a username and password; I used username dua and password dua123, but after the installation I can't log in. I also tried the default username and password "nexadmin/nexpassword" but still can't log in. Please help me figure this out. Thanks in advance :)
Posted by Himanshu Dua about a year ago
I am trying to run the Metasploit module for MS08-067 against a Windows XP SP3 (lang: English) target VM. I configured the network options in VMware to be bridged. When I run the exploit, it completes but no session starts. My target was never updated, which means it isn't patched, and no firewall is on. I tried different payloads (windows/shell_reverse_tcp, windows/meterpreter/reverse_tcp, and windows/meterpreter/bind_tcp) with no luck. Every time I get the same result:

[*] Started reverse TCP handler on 192.168.1.5:4444
[*] Automatically detecting the target...
[*] Fingerprint: Windows XP - Service Pack 3 - lang:English
[*] Selected Target: Windows XP SP3 English (AlwaysOn NX)
[*] Attempting to trigger the vulnerability...
[*] Exploit completed, but no session was created.
Posted by Thao Doan about a year ago
I am using a "Shared Scan Credential" in one of my sites, set up as a Samba/CIFS account for our non-domain-joined Windows servers. It doesn't seem to work. So how do I specify a local machine account vs. a domain account? I've tried using a ., using .\, and leaving the domain blank. None of those work. I can't use the machine name, since each server has a different machine name when you scan multiple servers.
Posted by Aaron Wrasman about a year ago
Is anyone else seeing their scheduled SQL reports change to "no email source" after being set to a global email source? New SQL reports on the current release do not seem to have the issue, until a new content update or new release (still undecided and untraceable). The working reports become broken, and the only way to fix them is to build the report from scratch (copies break too!), which, as you can expect, is not a viable option. I can duplicate this every day, yet there is no way to capture the change to the db setting that switches between email sources.

Steps to reproduce:
1. Find an older SQL report (pre-current release).
2. Change the query to:

   SELECT DISTINCT ON (da.ip_address) da.ip_address, da.host_name, da.mac_address,
          dos.description AS operating_system,
          to_char(fas.scan_finished, 'MM/DD/YYYY HH24:MI:SS') AS scan_finished
   FROM dim_asset da
   JOIN dim_operating_system dos USING (operating_system_id)
   JOIN dim_host_type dht USING (host_type_id)
   JOIN fact_asset_scan fas USING (asset_id)
   JOIN dim_tag_asset dta USING (asset_id)
   JOIN dim_tag dt USING (tag_id)
   WHERE scan_finished > NOW() - INTERVAL '30 days'

3. Set the data model to 2.3.0 (yet another issue).
4. Set the scope (currently 1 tag).
5. Set the frequency to run daily at 8:30 am.
6. Set the report owner as yourself; no other report viewers are necessary.
7. Set the email source to the global email source, send the report to the owner, and attach the report as a file.
8. Save and/or run the report.

Check back in the morning to find the report did run but didn't send, and the distribution settings have reverted back to "no email source".
Posted by kbruce about a year ago