The "data collected" section of the help docs (https://insightagent.help.rapid7.com/docs/data-collected) indicates that the Windows registry is scanned by the agent. However, I have servers with TLS 1.0 turned on that only show up in a network scan. What exactly is currently looked at in the registry, and are there plans to increase the coverage? We prefer the agent because it doesn't require us to maintain accounts on all of our servers and allows us to scan inside isolated labs with a collector without opening inbound firewall connections. Thank you.
Posted by John Byrne about an hour ago
I'm fairly confident this is because of my spaghetti of JOIN statements, but can anyone point me in the direction of why I'm getting duplicates? The goal is to show all current vulnerabilities with hostname and age_in_days, which, as I understand the data model, counts the days SINCE discovery of the vulnerability on the asset (fact_asset_vulnerability_age is, I think, the best place for that, converted using a CASE statement). The rest is fluff really, but I am getting crazy duplicates. https://github.com/talltechy/InsightVM_SQL_Queries/blob/master/Vulnerability_Aging_Detail.pgsql

----------- CODE FROM ABOVE LINK - CHECK ABOVE LINK FOR LATEST VERSION --------------

<code>
WITH
/* Created by Matt Wyen 4/15/19
   Last Committed 4/24/19
   https://help.rapid7.com/nexpose/en-us/warehouse/warehouse-schema.html */
/* CTEs: custom_tags, location_tags, owner_tags, criticality_tags, remediations, assets */
custom_tags AS (
    SELECT asset_id, CSV(tag_name ORDER BY tag_name) AS custom_tags
    FROM dim_tag
    JOIN dim_tag_asset USING (tag_id)
    WHERE tag_type = 'CUSTOM'
    GROUP BY asset_id
),
location_tags AS (
    SELECT asset_id, CSV(tag_name ORDER BY tag_name) AS location_tags
    FROM dim_tag
    JOIN dim_tag_asset USING (tag_id)
    WHERE tag_type = 'LOCATION'
    GROUP BY asset_id
),
owner_tags AS (
    SELECT asset_id, CSV(tag_name ORDER BY tag_name) AS owner_tags
    FROM dim_tag
    JOIN dim_tag_asset USING (tag_id)
    WHERE tag_type = 'OWNER'
    GROUP BY asset_id
),
criticality_tags AS (
    SELECT asset_id, CSV(tag_name ORDER BY tag_name) AS criticality_tags
    FROM dim_tag
    JOIN dim_tag_asset USING (tag_id)
    WHERE tag_type = 'CRITICALITY'
    GROUP BY asset_id
),
remediations AS (
    SELECT DISTINCT fr.solution_id AS ultimate_soln_id, summary, fix, estimate, riskscore,
           dshs.solution_id AS solution_id
    FROM fact_remediation(10,'riskscore DESC') fr
    JOIN dim_solution ds USING (solution_id)
    JOIN dim_solution_highest_supercedence dshs
        ON (fr.solution_id = dshs.superceding_solution_id
        AND ds.solution_id = dshs.superceding_solution_id)
),
assets AS (
    SELECT DISTINCT asset_id, host_name, sites, ip_address, mac_address,
           last_assessed_for_vulnerabilities
    FROM dim_asset da
    GROUP BY asset_id, host_name, sites, ip_address, mac_address,
             last_assessed_for_vulnerabilities
)
/* begin SELECT
   Need to get away from DISTINCT statements as they might be causing wonky results */
SELECT DISTINCT
    asset_id AS "Asset ID"
    ,host_name AS "Hostname"
    ,ip_address AS "IP"
    ,dos.description AS "Operating System"
    ,to_char(round(fa.riskscore::numeric,0),'999G999G999') AS "Asset Risk"
    ,csv(DISTINCT dv.title) AS "Vulnerability Title"
    ,dv.nexpose_id AS "ID / CVE"
    ,dv.description AS "Vulnerability Description"
    ,to_char(round(dv.riskscore::numeric,0),'999G999G999') AS "Vulnerability Risk"
    ,fav.age_in_days AS "Age in Days"
    ,CASE
        WHEN fav.age_in_days < 30 THEN '<30'
        WHEN fav.age_in_days > 30 AND fav.age_in_days <= 60 THEN '30-60'
        WHEN fav.age_in_days > 60 AND fav.age_in_days <= 90 THEN '61-90'
        ELSE '90+'
     END AS "Vulnerability Aging"
    ,fav.first_discovered AS "First Discovered"
    ,fav.most_recently_discovered AS "Most Recently Discovered"
    ,last_assessed_for_vulnerabilities AS "Last Assessed"
    ,summary AS "Solution"
    ,fix AS "Fix"
    ,fa.critical_vulnerabilities AS "Total Asset Critical"
    ,fa.severe_vulnerabilities AS "Total Asset Severe"
    ,fa.moderate_vulnerabilities AS "Total Asset Moderate"
    ,fa.vulnerabilities AS "Total Asset Vulnerabilities"
    ,sites AS "Sites"
    ,ct.custom_tags AS "Custom Tags"
    ,lt.location_tags AS "Location Tags"
    ,ot.owner_tags AS "Owner Tags"
    ,crt.criticality_tags AS "Criticality Tags"
/* end SELECT, begin FROM / JOIN */
FROM remediations r
JOIN dim_asset_vulnerability_solution dvs USING (solution_id)
JOIN dim_vulnerability dv USING (vulnerability_id)
JOIN assets USING (asset_id)
JOIN dim_asset_operating_system USING (asset_id)
JOIN dim_operating_system dos USING (operating_system_id)
JOIN dim_tag_asset dta USING (asset_id)
JOIN dim_tag dt ON dta.tag_id = dt.tag_id
-- fact_asset is where the total vulnerability counts come from
JOIN fact_asset fa USING (asset_id)
-- the tag CTEs defined above
LEFT OUTER JOIN custom_tags ct USING (asset_id)
LEFT OUTER JOIN location_tags lt USING (asset_id)
LEFT OUTER JOIN owner_tags ot USING (asset_id)
LEFT OUTER JOIN criticality_tags crt USING (asset_id)
-- age_in_days is joined here to drive the CASE statement that displays vulnerability aging
LEFT OUTER JOIN fact_asset_vulnerability_age fav USING (asset_id)
-- end FROM / JOIN, begin GROUP BY
GROUP BY
    dv.nexpose_id
    ,dv.title
    ,dv.description
    ,summary
    ,fix
    ,to_char(round(dv.riskscore::numeric,0),'999G999G999')
    ,fav.age
    ,fav.age_in_days
    ,fav.first_discovered
    ,fav.most_recently_discovered
    ,dv.severity
    ,to_char(round(fa.riskscore::numeric,0),'999G999G999')
    ,host_name
    ,dos.description
    ,ip_address
    ,asset_id
    ,last_assessed_for_vulnerabilities
    ,fa.critical_vulnerabilities
    ,fa.severe_vulnerabilities
    ,fa.moderate_vulnerabilities
    ,fa.vulnerabilities
    ,sites
    ,ct.custom_tags
    ,lt.location_tags
    ,ot.owner_tags
    ,crt.criticality_tags
-- end GROUP BY
ORDER BY "Hostname" DESC
</code>
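Looking at the join list against the warehouse data model, two joins stand out as likely row-multipliers: the raw dim_tag_asset / dim_tag join (one row per tag per asset, even though the tag CTEs already aggregate tags into one row per asset), and fact_asset_vulnerability_age joined on asset_id alone (one row per vulnerability age record on the asset). A minimal sketch of the second fix, assuming fact_asset_vulnerability_age carries both asset_id and vulnerability_id as in the published warehouse schema:

```sql
-- Sketch: join the age fact on both keys so each (asset, vulnerability)
-- output row picks up exactly one age record instead of fanning out
-- across every vulnerability age record on the asset.
LEFT OUTER JOIN fact_asset_vulnerability_age fav
    USING (asset_id, vulnerability_id)
```

Dropping the standalone dim_tag_asset / dim_tag join entirely is worth trying for the same reason, since the tag CTEs already supply the tag columns.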
Posted by Matt Wyen about 2 hours ago
When I ran Nexpose (InsightVM) for the first time, there was ERROR output and the scan stopped. How do I resolve this issue?

2019-04-23T08:24:45 [ERROR] Entry drupal-CVE-2018-1000888.xml not found in /opt/rapid7/nexpose/plugins/java/1/DrupalScanner/1/vulns.jar. Please update to the latest product version.
2019-04-23T08:24:45 [ERROR] drupal-CVE-2018-1000888.xml not found. Please update to the latest product version.
2019-04-23T08:24:45 [WARN] [Scan ID: 1] Success callbacks not running due to error in task
Posted by Hubton a day ago
Hello, we ran a scan that came up with 2 vulnerabilities. They are both for the same issue, and the only difference is that one of the URLs has the server IP address and the other has the server name. We believe we resolved the issue, but only the vulnerability with the server name in the URL has been removed from the scan; the IP address URL still remains. What would be the difference between the two? I would assume they would drop off together, since the IP address resolves to that server name. Any suggestions would be greatly appreciated. Thank you.
Posted by James Hardiman 2 days ago
I have ruled out many vulnerabilities for a site but they keep showing up in the reports I generate, such as an Audit Report or Basic Vulnerability Check. For example, all PHP-related vulnerabilities have been submitted, approved, ruled out, and verified by the admin. They are listed in the asset's vulnerability exception list as well as the Administrator Vulnerability Exceptions page, which states that the vulnerabilities have been approved by admin and the Exception Scope is All Instances. Yet they keep showing up in reports. I am using the Rapid7 InsightVM Free Trial; the license is still active. The system is a Windows 2012 R2 server, x64-based OS, with 16 GB RAM, and meets all the minimum requirements. I have restarted the system and installed InsightVM as administrator with firewall and antivirus disabled. I have also tried uninstalling it and re-installing it again. This is for one scan that had 449 assets and finished successfully with 6,903 vulnerabilities found. Any assistance would be appreciated.
Posted by Lee Zimmerman 2 days ago
Hi team, I work for a private firm in India, and I am facing an issue with Nexpose vulnerability results. Here are the details: I am getting a report saying that Office-related patches are required for a server, but when I log in and check manually, there is no Office installed on the specific workstation (only a SharePoint component is installed). As per the Nexpose suggestions, I tried installing the suggested patch to resolve the vulnerability, but when I run the patch, the system says this patch is not applicable. I am not sure how to identify or fix this vulnerability. Kindly suggest. Thank you.
Posted by Sharan 3 days ago
I've got an API key and am able to access the following endpoints:

https://us.rest.logs.insight.rapid7.com
https://us.rest.logs.insight.rapid7.com/management

But anything else I try to run gets the following:

```
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 404 Not Found</title>
</head>
<body>
<h2>HTTP ERROR 404</h2>
<p>Problem accessing /management/organizations/plans. Reason:
<pre>    Not Found</pre>
</p>
</body>
</html>
```

Is there any kind of authorization issue that could cause this? I'm not even able to get a list of logs.
Posted by Lucas Lowry 3 days ago
Does anyone know where to change the default From email address for Metasploit Pro reports? It is currently set to send from email@example.com, which is causing issues. Is there a way to change this either from the GUI or terminal?
Posted by Max 4 days ago
Can I move event sources to a second collector to offload some work? I have a second collector just installed in a remote location and I want it to capture the events local to it. I do not see any way to transfer the resources over. I do not want to move all of them, just the international ones to the new collector. Will it screw up existing records? Any way to do it or do I need to delete them and then add to the new one?
Posted by Kerry LeBlanc 5 days ago
Hello all, Has anyone figured out how to use the Solution Time metric attached to vuln defs in a useful manner? For instance, if vuln A and vuln B are both resolved by the same software version update, the real effort is not ( (vuln A solution time) + (vuln B solution time) ), but simply (vuln A|B solution time). Thanks, Matt
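One way to approximate this without double counting, assuming the warehouse's fact_remediation function and its estimate column (both of which appear in other queries on this forum): fact_remediation collapses vulnerabilities that share a single fix into one solution row, so summing estimate over its rows counts each shared fix once.

```sql
-- Sketch: total estimated remediation effort for the top 25 solutions
-- by risk; vulns resolved by the same solution contribute one row.
SELECT sum(estimate) AS total_solution_time
FROM fact_remediation(25, 'riskscore DESC');
```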
Posted by Matt Brown 6 days ago
Hello, I'd like to produce a KRI value that shows the percentage of scanned hosts versus a list of known reachable subnets. For instance, this list of reachable subnets can be pulled from SolarWinds IPAM, SNMP polling of Cisco gear, or simply a CSV/XML/JSON file. A discovery scan will not suffice, as there may be drift between the in-scope subnets within the Nexpose/InsightVM system and the subnets that are really reachable. Thanks! Matt
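In the warehouse this could be sketched as a coverage query, assuming a hand-loaded known_subnets(subnet cidr) table exported from IPAM (the table name and export step are hypothetical, and the usable-address math assumes conventional IPv4 subnet sizes):

```sql
-- Sketch: scanned-host count per known subnet, plus a rough coverage
-- percentage against the subnet's usable address count (IPv4 only).
SELECT ks.subnet,
       count(da.asset_id) AS scanned_hosts,
       round((100.0 * count(da.asset_id)
             / (2 ^ (32 - masklen(ks.subnet)) - 2))::numeric, 1) AS pct_covered
FROM known_subnets ks
LEFT JOIN dim_asset da ON da.ip_address::inet << ks.subnet
GROUP BY ks.subnet
ORDER BY pct_covered;
```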
Posted by Matt Brown 6 days ago
Hi, a scan of many assets often goes like this:

1) Discovery: found 400 assets (takes, let's say, 10 minutes).
2) Vulnerability scanning based on the simultaneous-assets setting (not sure of the right English word), e.g. 20 assets / 20 processes, goes well and 390 assets are done in, let's say, 2 hours.
3) But then 10 assets originally discovered as alive at the beginning of the scan have been shut down before the scanner actually got to them, and now the scanner waits so long because of the timeouts.

This might be acceptable at the very end of the scan, but if it happens in the middle, the scan might not finish in a reasonable time. If the setting is 20 assets / 20 processes and 150 assets (originally alive) die during the scan, the scan will accumulate so many timeout delays that it might not even finish (if we are scanning user workstations, we need to finish before the end of working hours), and assets that are still alive are not even scanned, even though they could be, because the scanner cannot get to them in time. Is there any workaround to check whether assets originally discovered as alive at the beginning of the scan are still alive during the scan (before the scanner actually starts scanning them)?
Posted by Jiri Dohnal 6 days ago
Hello, What is the best method for decommissioning assets that haven't been reachable for N days? I've created a site/asset group that contains these assets, but I'm not sure how to expire/decommission them. Or rather... I have no idea what I'm doing and would rather understand best practices. Thanks, Matt
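As a starting point for identifying the "not seen in N days" set, here is a hedged sketch against the warehouse schema (90 days is an arbitrary example window):

```sql
-- Sketch: assets whose last vulnerability assessment is older than the
-- retention window, i.e. candidates for decommissioning.
SELECT asset_id, host_name, ip_address, last_assessed_for_vulnerabilities
FROM dim_asset
WHERE last_assessed_for_vulnerabilities < CURRENT_DATE - INTERVAL '90 days'
ORDER BY last_assessed_for_vulnerabilities;
```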
Posted by Matt Brown 6 days ago
I was wondering if I could get insight into what should be in the InsightVM varfile response document for the Linux InsightVM installer. I am looking into auto-deploying agents and would like them to self-configure to talk to my console server. I do not want to use the AWS AMI pre-authorized scanner, as I want console -> scanner traffic only. Example: ``` Starting Installer ... The following command line options are available: -varfile [file] Use a response file -c Run in console mode -q Run in unattended mode -dir [directory] In unattended mode, set the installation directory -overwrite In unattended mode, overwrite all files -splash [title] In unattended mode, show a progress window -Dname=value Set system properties -h Show this help ```
Posted by ekelson 6 days ago
What is the best method to manage duplicate assets? A client of mine currently has Asset Linking enabled and is referring to the following SQL query report to find duplicate assets:

SELECT da.host_name, COUNT(*)
FROM dim_asset da
GROUP BY da.host_name
HAVING COUNT(*) > 1

All assets returned simply have multiple IPs matching their `COUNT(*)`. Therefore, this is an invalid method to find duplicate assets. Is there a better method to discover duplicate assets? Thanks, Matt
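A sketch that surfaces the likely duplicates for review rather than just counting rows, assuming the warehouse dim_asset table and its CSV aggregate (used in other queries on this forum):

```sql
-- Sketch: hostnames backed by more than one asset record, with the
-- distinct asset_ids and IPs listed so stale duplicates can be reviewed
-- (and legitimately distinct hosts sharing a name can be ruled out).
SELECT host_name,
       count(DISTINCT asset_id) AS asset_records,
       csv(DISTINCT ip_address::text) AS ip_addresses
FROM dim_asset
GROUP BY host_name
HAVING count(DISTINCT asset_id) > 1;
```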
Posted by Matt Brown 6 days ago
Hi, I have a site populated by a dynamic asset group. The DAG filter is set like "Vulnerabilities assessed - earlier than - 30 days", which should spread the load across the whole month by scanning only a portion of the assets every day, and each asset only once a month (every 30 days). So far so good. However, the DAG doesn't return the assets which have NEVER had vulnerabilities assessed (e.g. fresh new assets, or assets touched only by a discovery scan). I of course also need to include these assets in the daily load, otherwise I will never scan them for vulnerabilities. I would need a DAG filter which produces an asset list including the assets whose vulnerabilities haven't been assessed in the last 30 days plus the assets whose vulnerabilities have NEVER been assessed. Would anyone know how to achieve that? THX.
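The wanted set can at least be expressed in warehouse SQL (a DAG filter itself isn't SQL, so this is only a sketch of the intended membership, not the filter configuration):

```sql
-- Sketch: assets never assessed for vulnerabilities, plus assets not
-- assessed within the last 30 days.
SELECT asset_id, host_name, last_assessed_for_vulnerabilities
FROM dim_asset
WHERE last_assessed_for_vulnerabilities IS NULL
   OR last_assessed_for_vulnerabilities < CURRENT_DATE - INTERVAL '30 days';
```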
Posted by Jiri Dohnal 7 days ago
Hi, When Nexpose is upgraded to InsightVM, the following information is transmitted to InsightVM on the Rapid7 cloud. Will transmitting this information to the cloud violate any compliance or audit requirements, and do we need to obtain customer consent before transmitting it to InsightVM?

Asset information
Asset groups
Asset owners
Vulnerabilities
Vulnerability exceptions
Tags
Scan Engine information
InsightVM Console information

InsightVM does not transmit user or service credentials of any kind to the Insight platform.

https://nexpose.help.rapid7.com/v1.0/docs/configure-communications-with-the-insight-platform

Thanks in advance. Regards/- Charan
Posted by charan teja 8 days ago