In my previous post, I mentioned problems encountered while processing vuxml entries for horde4-imp. The problem was not specific to that particular port; it came to light because a vuln was registered for the port and then later removed. I tracked the cause down to statistics not being cleared out before the new vulnerabilities were processed. This post shows the fix I came up with.
The table that was not being updated properly was ports_vulnerable. It looks like this:
freshports.org=# \d ports_vulnerable
    Table "public.ports_vulnerable"
 Column  |  Type   |     Modifiers
---------+---------+--------------------
 port_id | integer | not null
 current | integer | not null default 1
 past    | integer | not null default 0
Indexes:
    "ports_vulnerable_pkey" PRIMARY KEY, btree (port_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (port_id) REFERENCES ports(id) ON UPDATE RESTRICT ON DELETE CASCADE

freshports.org=#
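As an aside, a query along these lines (my own sketch, relying on ports_all, which also appears in the trigger later in this post) lists the ports with a non-zero current count:

-- sketch only: assumes ports_all exposes id, category, and name,
-- the same columns the trigger further down selects from it
SELECT PA.category, PA.name, PV.current, PV.past
  FROM ports_vulnerable PV
  JOIN ports_all        PA ON PA.id = PV.port_id
 WHERE PV.current > 0
 ORDER BY PA.category, PA.name;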
The issue was easily resolved with this patch:
--- scripts/process_vuxml.pl 2006/12/17 12:04:02 1.2
+++ scripts/process_vuxml.pl 2011/10/02 17:29:20 1.3
@@ -1,6 +1,6 @@
#!/usr/bin/perl -w
#
-# $Id: process_vuxml.pl,v 1.2 2006/12/17 12:04:02 dan Exp $
+# $Id: process_vuxml.pl,v 1.3 2011/10/02 17:29:20 dan Exp $
#
# Copyright (c) 2001-2004 DVL Software
#
@@ -36,7 +36,13 @@ sub EmptyVuXML($) {
$sql = "DELETE FROM vuxml";
$sth = $dbh->prepare($sql);
if (!$sth->execute()) {
- FreshPorts::Utilities::ReportError('warning', "Could not execute sql", 1);
+ FreshPorts::Utilities::ReportError('warning', "Could not execute sql: " . $sql, 1);
+ }
+
+ $sql = "DELETE FROM ports_vulnerable";
+ $sth = $dbh->prepare($sql);
+ if (!$sth->execute()) {
+ FreshPorts::Utilities::ReportError('warning', "Could not execute sql: " . $sql, 1);
}
}
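In other words, at the start of each run the script now does the equivalent of the following (a sketch; wrapping the two statements in a transaction is my own addition, not something the patch above does):

BEGIN;
DELETE FROM vuxml;             -- drop the previously imported vuln.xml entries
DELETE FROM ports_vulnerable;  -- drop the per-port vulnerability statistics as well
COMMIT;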
With this patch, I’m clearing out the table each time a new vuln.xml file is processed. However, that is not enough. Whenever a record is removed from this table, the cache for that port also needs to be cleared.
The following patch queues up a cache clearing request for each delete on the ports_vulnerable table:
--- database-schema/ri.txt 2008/09/09 13:02:38 1.45
+++ database-schema/ri.txt 2011/10/02 17:30:40 1.46
@@ -1,5 +1,5 @@
--
--- $Id: ri.txt,v 1.45 2008/09/09 13:02:38 dan Exp $
+-- $Id: ri.txt,v 1.46 2011/10/02 17:30:40 dan Exp $
--
-- Copyright (c) 1998-2007 DVL Software Limited
--
@@ -254,6 +254,42 @@ CREATE TRIGGER ports_clear_cache
EXECUTE PROCEDURE ports_clear_cache();
+CREATE OR REPLACE FUNCTION ports_vulnerable_delete_clear_cache() RETURNS TRIGGER AS $$
+ DECLARE
+ l_cache_clearing_ports_id int8;
+ l_port text;
+ l_category text;
+ BEGIN
+ IF TG_OP = 'DELETE' THEN
+ SELECT port_id
+ INTO l_cache_clearing_ports_id
+ FROM cache_clearing_ports
+ WHERE port_id = OLD.port_id;
+
+ IF NOT FOUND THEN
+ SELECT category, name
+ INTO l_category, l_port
+ FROM ports_all
+ WHERE id = OLD.port_id;
+ INSERT INTO cache_clearing_ports (port_id, category, port)
+ VALUES (OLD.port_id, l_category, l_port);
+ END IF;
+
+ NOTIFY port_updated;
+ END IF;
+
+ -- when a port changes, add an entry to the cache clearing table
+ RETURN OLD;
+ END
+$$ LANGUAGE 'plpgsql';
+
+ DROP TRIGGER ports_vulnerable_delete_clear_cache ON ports_vulnerable;
+CREATE TRIGGER ports_vulnerable_delete_clear_cache
+ AFTER DELETE on ports_vulnerable
+ FOR EACH ROW
+ EXECUTE PROCEDURE ports_vulnerable_delete_clear_cache();
+
+
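A quick way to sanity-check the trigger is something along these lines in a test database (a sketch; the port id 12345 is made up, and it assumes that port actually has a row in ports_vulnerable):

BEGIN;

-- deleting a row should fire the AFTER DELETE trigger
DELETE FROM ports_vulnerable WHERE port_id = 12345;

-- the same port should now be queued for cache clearing
SELECT port_id, category, port
  FROM cache_clearing_ports
 WHERE port_id = 12345;

-- throw the test away
ROLLBACK;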
I’ve tried this in dev, staging, and production. It seems to be the right thing to do.