Dan Langille

I've been playing with Open Source software, starting with FreeBSD, since New Zealand Post installed DSL on my street in 1998. From there, I started writing at The FreeBSD Diary, moving my work here after I discovered WordPress. Along the way, I started the BSDCan and PGCon conferences. I slowly moved from software development into full-time systems administration and now work for a very well-known company that has been a big force in the security industry.

Sep 27, 2019

I was just creating a new jail for working on git & FreshPorts. I was intrigued to see that iocage uses zfs send | zfs receive to create the new jail:

[dan@slocum:~] $ ps auwwx | grep iocage
root      64166    3.7  0.0   12788    4036  1  D+   21:16         0:06.10 zfs send system/iocage/releases/12.0-RELEASE/root@git-dev
root      64167    2.8  0.0   12752    4036  1  S+   21:16         0:03.60 zfs receive -F system/iocage/jails/git-dev/root
root      63910    0.0  0.0   16480    7384  1  I+   21:16         0:00.01 sudo iocage create -r 12.0-RELEASE --thickjail --name git-dev
root      63911    0.0  0.0   53344   42484  1  I+   21:16         0:01.01 /usr/local/bin/python3.6 /usr/local/bin/iocage create -r 12.0-RELEASE --thickjail --name git-dev
dan       67954    0.0  0.0   11288    2732  3  S+   21:18         0:00.00 grep iocage
[dan@slocum:~] $ 

More later, after I get this jail configured.

Edit: 2019-09-28

From Twitter:

Something is being copied, is that a cached version of the jail template?

The answer is a local copy of FreeBSD 12.0-RELEASE:

[dan@slocum:~] $ zfs list -r system/iocage/releases
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
system/iocage/releases                    3.15G  15.9T   176K  /iocage/releases
system/iocage/releases/11.2-RELEASE       1.44G  15.9T   176K  /iocage/releases/11.2-RELEASE
system/iocage/releases/11.2-RELEASE/root  1.44G  15.9T  1.44G  /iocage/releases/11.2-RELEASE/root
system/iocage/releases/12.0-RELEASE       1.71G  15.9T   176K  /iocage/releases/12.0-RELEASE
system/iocage/releases/12.0-RELEASE/root  1.71G  15.9T  1.71G  /iocage/releases/12.0-RELEASE/root
[dan@slocum:~] $ 

What’s in there?

[dan@slocum:~] $ ls /iocage/releases/12.0-RELEASE/root
COPYRIGHT boot      etc       libexec   mnt       proc      root      sys       usr
bin       dev       lib       media     net       rescue    sbin      tmp       var
[dan@slocum:~] $ 
Sep 22, 2019

We have the first commit processed via git into FreshPorts. Details are in this git comment.

Work remaining:

  1. check out that commit into the working copy of the files
  2. run make -V on the working copy to get the refreshed values for the port[s] affected by this commit

The 2nd part requires very little code change.

The 1st part is just playing with git.

My thanks to Sergey Kozlov for his code which creates the XML FreshPorts needs for commit processing. That has been a great time saver for me.

Sep 18, 2019

I want to move FreshPorts towards using commit hooks and away from depending upon incoming emails for processing new commits.

Much of the following came from a recent Twitter post.

You might think: why are we using emails? Why? Because we can. They were the easiest and simplest approach, and a time-proven solution. Look at https://docs.freshports.org/ and you can see the original ideas from 2001. That is over 18 years of providing data.

If email is so good, why stop?

Because we can.

And we won’t stop using email.

Email will stay around as a fall-back position. Commit hooks are a tighter dependency upon a third party and require close cooperation. Should that relationship sour, the cooperation may terminate.

If web-hooks proceed, email processing will be modified to introduce an N-minute delay. After leaving the N-minute queue, the mail will be:

  • ignored if the commit has already been processed
  • processed if the commit is not in the database
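A minimal sketch of that post-delay decision, using a plain text file of already-processed commit IDs (the file name and commit IDs here are hypothetical; FreshPorts would check the database instead):

```shell
#!/bin/sh
# Decide whether a delayed commit email still needs processing.
# processed.txt (hypothetical) holds one already-processed commit ID per line.
PROCESSED_LIST=processed.txt

handle_delayed_email() {
    commit_id="$1"
    if grep -Fqx "$commit_id" "$PROCESSED_LIST" 2>/dev/null; then
        echo "ignored $commit_id"       # the hook already handled it
    else
        echo "processed $commit_id"     # the hook missed it; process the email
        echo "$commit_id" >> "$PROCESSED_LIST"
    fi
}

handle_delayed_email r352332    # first sighting: processed
handle_delayed_email r352332    # second sighting: ignored
```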

How is a commit identified?

Email processing is keyed on the Message-Id header of the email, which is stored in the database. Duplicates are ignored.

I am not sure if we also check the subversion revision number. That might be wise. There is an index, but it is not unique.

If we move to commit-hooks, message-id will not be available. We will have to change to relying upon the revision number or, in the case of git, the commit hash.

The likely database changes are:
  • add a unique index on commit_log.svn_revision
  • remove not null constraint on commit_log.message_id
  • add commit_log.commit_hash with a unique index

Commit processing

Regardless of how we get notified of a new commit, we must be able to put our local copy of the repo into the state as of a given commit.

For subversion, we do this:

svn up -r REVISION

After this, various commands, such as make -V, are run to extract the necessary values from the ports tree (as of the commit). This information includes PORTVERSION, PORTREVISION, etc. You can see why it is vital to have everything in our ports tree reflect the repo as of that particular commit.

For git, it is similar:

git checkout HASH

The same scripts, as described above, would be run.

Commit hooks

These are the assumptions for a commit hook:

  1. the hook gets triggered exactly once per commit
  2. the hook is fast, so as not to slow down commits

In order to be fast, the basic information has to be passed along to another daemon, which then puts it into a queue, which is then processed by another daemon. This queue must be persistent.

I am using hare and hared here as examples only because I am familiar with them. They won’t actually do what I need, but if I were to fork them and modify them for this specific task, I think they would do the job rather well.

My initial thoughts are:

  1. The hook invokes something like hare (see also sysutils/hare), which sends a UDP packet to something else. The packet contains the commit revision number (if subversion) or hash (if git).
  2. The UDP packet is received by something like hared (same link as above for hare, but available via sysutils/py-hared).
  3. hared then adds the data to a queue. What type of queue and where it is located is for later design.
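The persistent queue in step 3 could be as simple as a spool directory with one file per commit. A sketch, with the directory layout and file naming entirely my own invention (not anything hare or hared actually does):

```shell
#!/bin/sh
# Hypothetical persistent queue: one file per commit under a spool directory.
# The writer creates the file atomically (write to a dotfile, then mv); the
# consumer processes the files in name order and removes each when done.
QUEUE=./spool
mkdir -p "$QUEUE"

enqueue() {
    tmp=$(mktemp "$QUEUE/.tmp.XXXXXX")
    echo "$1" > "$tmp"
    # mv within one filesystem is an atomic rename; the consumer never
    # sees a partially written entry.
    mv "$tmp" "$QUEUE/$(date +%s).$$.$1"
}

dequeue_all() {
    for f in "$QUEUE"/*; do
        [ -f "$f" ] || continue
        cat "$f"            # stand-in for: process this commit
        rm "$f"
    done
}

enqueue abc1234
enqueue def5678
dequeue_all
```

Surviving a reboot is exactly what the spool directory buys over an in-memory queue.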

Commit iteration

When processing email, looping through the email is your iteration. When you have no email, you need something else to iterate through.

git commit iteration

I think this is the command we want to use when iterating through git commits:

git rev-list HASH..HEAD

Where HASH is the hash of our most recently processed commit. The most recently processed commit is not necessarily the last one we committed; it is the commit with the most recent timestamp. Here is an example:

$ git rev-list ee38cccad8f76b807206165324e7bf771aa981dc..HEAD

Using the above, perhaps the logic for processing commits will be:

detect a new commit
git pull
use git rev-list to get the list of new commits
for i = oldest new commit to newest new commit {
  git checkout that commit
  process that commit
}
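Assuming git is available (version 2.28 or later for the -b flag), that loop can be exercised against a throwaway repository; the repository, commit messages, and the "processing" echo are all made up for the demo:

```shell
#!/bin/sh
# Build a scratch repo, then walk the commits made after a recorded point,
# oldest first, exactly as the processing loop would.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
commit() { git -c user.name=demo -c user.email=demo@example.org \
               commit -q --allow-empty -m "$1"; }

commit 'first'
last_processed=$(git rev-parse HEAD)    # pretend this is where we left off
commit 'second'
commit 'third'

# --reverse yields oldest-first, the order the commits must be processed in.
for hash in $(git rev-list --reverse "$last_processed"..HEAD); do
    git checkout -q "$hash"                       # tree now matches the commit
    git log -1 --format="processing %s" "$hash"   # stand-in for the make -V step
done
git checkout -q main                              # back to the branch tip
```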

subversion commit iteration

With subversion we have a revision id, which is an integer.

The repo can be queried for its highest revision via:

svn log -r head

With that revision number, the code to process the commits is

for i = LastCommitProcessed + 1; i <= LatestCommit; i++ {
  svn up -r $i
  process that commit
}
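That loop, with svn up and the extraction step stubbed out by an echo so the control flow stands on its own (the revision numbers are invented):

```shell
#!/bin/sh
# Walk revisions from the one after our last processed commit up to the
# repository's latest. The real work is stubbed out with an echo.
last_processed=5
latest=8

process_revision() {
    # in real life: svn up -r "$1", then run the make -V extraction
    echo "processing r$1"
}

i=$((last_processed + 1))
while [ "$i" -le "$latest" ]; do
    process_revision "$i"
    i=$((i + 1))
done
```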

How do we handle gaps in the subversion revision sequence? If we have commits, 5, 7, and 8, where is commit 6? How do we note that commit 6 does not exist and that we need to get it? What if the repo has no commit 6?

Current status

Extracting the required data from the repo instead of the email should be straightforward. It must still be tested and verified.

Iterating over the commits still needs to be proven to work. Hopefully that can start soon.

Sep 03, 2019

I’m trying to think of a list of things that FreshPorts can do which might be useful.

I can think of these:

  • provides example dependency line. e.g. p5-XML-RSS>0:textproc/p5-XML-RSS
  • list of dependencies for a port
  • list of ports depending upon this port
  • Default configuration options
  • what packages install a given file (e.g. bin/unzip)
  • what ports does this person maintain?
  • which Makefiles contain a reference to bunzip?
  • search results can be plain-text consisting of a list of foo/bar ports
  • The Maximum Effort checkbox on the search page does nothing.
  • Committers can be notified of sanity test failures after the commit
  • Find a commit, any commit, based on SVN revision number, e.g. : https://www.freshports.org/commit.php?revision=352332

Any more ideas?

Sep 02, 2019

When the time comes, and the FreeBSD project is using git, there will be work to be done on FreshPorts. If the commit emails are similar to those under cvs and svn, it should be straightforward to parse the email and convert it to XML.

Once the data is in XML, the commit can be loaded into FreshPorts. The commit is the basis for most other data.

I am not sure of the work process after that. I think it will be as simple as:

  1. git pull
  2. git checkout HASH

where HASH is the hash value associated with the commit in question. I’m assuming the commit hash will be in the commit email.

Processing commits in order

One longstanding FreshPorts issue (which I notice is not recorded): If commits are processed out of order, things can go wrong.

FreshPorts depends upon the email arriving in the order in which the commits occurred. There is no guarantee of this. FreshPorts processes the emails in the order they arrive. It achieves this by putting each email into a queue and processing the queue in order.

This is the ideal workflow:

  1. FreshPorts gets a notice: Hey, there’s been a commit
  2. FreshPorts looks to see how many new commits there are and processes each one in order

Step 1 can be as easy as querying the repo manually every minute, or a hook on the repo which taps FreshPorts.

Step 2 might be challenging, but I am not sure. I don’t know how to say: list me all commits after X. I don’t know how to detect missed commits.

List git commits

Here is a partial list of git commits:

[dan@dev-nginx01:~/www] $ git log
commit 6e21a5fd3a7eeea3ada9896b1b5657a6ba121fd8 (HEAD -> master, origin/master, origin/HEAD)
Author: Dan Langille <dan@langille.org>
Date:   Fri Aug 23 15:24:51 2019 +0000

    Simplify the deleted ports section of "This port is required by"
    Remove the <dl><dd><dt> stuff and keep it straight forward.
    Move the "Collapse this list" into the final <li> of the list.

commit 11950339914066ea9298db4fbccc421a1d414108
Author: Dan Langille <dan@langille.org>
Date:   Fri Aug 23 15:12:29 2019 +0000

    Fix display of long lists
    Fixes #126
    While here, fix the #hidden part of the "Expand this list (N items / X hidden)" message.

commit 5f0c06c21cb8be3136d7562e12033d39d963d8b3
Author: Dan Langille <dan@langille.org>
Date:   Fri Aug 23 12:59:35 2019 +0000

    Improve links to validation URLS
    * move to https
    * show source on HTML link
    * add referer to CSS link

commit 20c2f1d6619e968db56f42b6632d4ddf6a8d00bb (tag: 1.35)
Author: Dan Langille <dan@langille.org>
Date:   Tue Aug 20 16:19:47 2019 +0000

    Under 'This port is required by:' format deleted ports better
    Fixes #125

commit cc188d6ecde7a19c7317ca5477495e1618d70fe9
Author: Dan Langille <dan@langille.org>
Date:   Fri Aug 16 19:04:09 2019 +0000

    Add more constants:
    * FRESHPORTS_LOG_CACHE_ACTIVITY - log all caching activity
    * PKG_MESSAGE_UCL               - process pkg-message as UCL content

commit 309b10946785ce4254e71b9ebbf116c98095fa53 (tag: 1.34.2)
Author: Dan Langille <dan@langille.org>
Date:   Fri Aug 16 18:32:59 2019 +0000

    Comment out some debug stuff.
    Remove fseek, not required.

The issue: if the last commit processed by FreshPorts is 5f0c06c21cb8be3136d7562e12033d39d963d8b3, how can I get a list of all commits since then?

Google tells me:

[dan@dev-nginx01:~/www] $ git log 5f0c06c21cb8be3136d7562e12033d39d963d8b3..
commit 6e21a5fd3a7eeea3ada9896b1b5657a6ba121fd8 (HEAD -> master, origin/master, origin/HEAD)
Author: Dan Langille <dan@langille.org>
Date:   Fri Aug 23 15:24:51 2019 +0000

    Simplify the deleted ports section of "This port is required by"
    Remove the <dl><dd><dt> stuff and keep it straight forward.
    Move the "Collapse this list" into the final <li> of the list.

commit 11950339914066ea9298db4fbccc421a1d414108
Author: Dan Langille <dan@langille.org>
Date:   Fri Aug 23 15:12:29 2019 +0000

    Fix display of long lists
    Fixes #126
    While here, fix the #hidden part of the "Expand this list (N items / X hidden)" message.
[dan@dev-nginx01:~/www] $ 

Work is required

Regardless of when git arrives, there will be work to be done. How much work, I don’t know yet.

Jul 13, 2019

Today I updated the test website with two changes:

  1. use of dd, dt, and dl tags in the details section of the ports page
  2. Three new graphs:
    1. doc
    2. ports
    3. src

The tags part was all the result of me reading up on them and concluding they could be useful.

The graphs were swills’ fault. They took about an hour to do, and most of that was figuring out the required changes.

I started with www/graphs2.php

Also involved is www/generate_content.php and www/graphs.js, but you can see the whole commit if you want.

Not included in the code are some SQL queries, which were saved in the issue.


May 25, 2019

I’m writing this post just to keep things straight in my head so I can decide how best to resolve this issue.

FreshPorts uses /var/db/freshports/cache/spooling on both the ingress jail and the nginx jail.

The nginx jail uses it for caching content. Page details are first spooled into /var/db/freshports/cache/spooling before being moved to /var/db/freshports/cache/ports.
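That spool-then-move step is the standard trick for atomic cache writes; here is a minimal sketch, with the paths shortened to the current directory and the file name hypothetical:

```shell
#!/bin/sh
# Write the rendered page into the spool directory first, then rename it
# into the cache. rename(2) is atomic, so a reader never sees a
# half-written cache file.
CACHE=./cache/ports
SPOOL=./cache/spooling
mkdir -p "$CACHE" "$SPOOL"

cache_page() {
    name="$1"; content="$2"
    echo "$content" > "$SPOOL/$name"      # the slow write happens here
    mv "$SPOOL/$name" "$CACHE/$name"      # atomic publish
}

cache_page sysutils.iocage.html '<html>rendered page</html>'
cat "$CACHE/sysutils.iocage.html"
```

This only works as an atomic rename when spool and cache live on the same filesystem, which is one reason the two directories sit side by side.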

The ingress jail uses this for refreshing various cached items.

This directory is configured by the FreshPorts-Scripts package, which is installed in both jails.

The problem: this directory is created chown freshports:freshports but it needs to be chown www:freshports in the jail.

My first question is: why does the nginx jail need the FreshPorts-Scripts package? It contains ingress related scripts. By that, I mean scripts related to incoming commits and the code to get them into the FreshPorts database.

How does it get into the jail?

[dan@x8dtu-nginx01:~] $ sudo pkg delete FreshPorts-Scripts
Checking integrity... done (0 conflicting)
Deinstallation has been requested for the following 3 packages (of 0 packages in the universe):

Installed packages to be REMOVED:

Number of packages to be removed: 3

The operation will free 4 MiB.

Proceed with deinstalling packages? [y/N]: n

Two other ports require it.

Ahh, yes, the fp-listen daemon needs the scripts:

[dan@x8dtu-nginx01:~] $ ps auwwx | grep fp-listen
root       35775  0.0  0.0   4244  1944  -  IJ   17:58   0:00.00 supervise fp-listen
freshports 35777  0.0  0.0  21076 16392  -  SJ   17:58   0:00.43 /usr/local/bin/python2.7 /usr/local/lib/python2.7/site-packages/fp-listen/fp-listen.pyc
dan        74034  0.0  0.0   6660  2532  2  S+J  18:57   0:00.00 grep fp-listen
[dan@x8dtu-nginx01:~] $ 

That’s going to be running on nginx regardless. That daemon listens to the PostgreSQL database for updates and clears the relevant portions of on-disk cache.

At first, I was trying to figure out what was installing the www user on the nginx jail. Then I realized, with help, that the www user is present by default; it was originally added back in 2001.

I see a solution:

  • chown www:freshports
  • chmod 775

That translates to this entry in the pkg-plist file:

@dir(www,freshports,775) %%FP_DATADIR%%/cache/spooling

That seems to fix the rename errors I was seeing:

2019/05/25 18:32:33 [error] 35875#100912: *4277 FastCGI sent in stderr: "PHP message: PHP Warning:  
ead.PageSize100.PageNum1.html): Operation not permitted in /usr/local/www/freshports/classes/cache.php on line 83" while reading 
response header from upstream, client:, server: www.freshports.org, request: "GET /dns/odsclient HTTP/1.1", upstream: 
"fastcgi://unix:/var/run/php-fpm.sock:", host: "www.freshports.org"

Thanks for coming to my TED talk.

Jan 27, 2019

Yesterday I copied data from the old production server to the new production server. One thing I missed, but did think about at the time, was updating the sequence used by the table in question. Looking at the table definition:

freshports.org=# \d report_log
                                          Table "public.report_log"
    Column    |           Type           | Collation | Nullable |                  Default                   
--------------+--------------------------+-----------+----------+--------------------------------------------
 id           | integer                  |           | not null | nextval('report_log_id_seq'::regclass)
 report_id    | integer                  |           | not null | 
 frequency_id | integer                  |           |          | 
 report_date  | timestamp with time zone |           | not null | ('now'::text)::timestamp(6) with time zone
 email_count  | integer                  |           | not null | 
 commit_count | integer                  |           | not null | 
 port_count   | integer                  |           | not null | 
    "report_log_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "$1" FOREIGN KEY (frequency_id) REFERENCES report_frequency(id) ON UPDATE CASCADE ON DELETE CASCADE


The report_log_id_seq value will be wrong. When the reports run, they will use values for id which are already present in the table. To confirm, I ran this test:

freshports.org=# BEGIN;
freshports.org=# INSERT INTO report_log (report_id, frequency_id, email_count, commit_count, port_count) VALUES (2, 4, 0, 0, 0);
ERROR:  duplicate key value violates unique constraint "report_log_pkey"
DETAIL:  Key (id)=(19074) already exists.
freshports.org=# ROLLBACK;
freshports.org=# SELECT max(id) FROM report_log;
(1 row)


Historically, I have done this with setval but today I will try ALTER SEQUENCE.

freshports.org=# BEGIN; ALTER SEQUENCE report_log_id_seq RESTART WITH 20145;
freshports.org=# INSERT INTO report_log (report_id, frequency_id, email_count, commit_count, port_count) VALUES (2, 4, 0, 0, 0);
freshports.org=# ROLLBACK;

That worked, so I rolled it back; this time I’ll save the changes without inserting data:

freshports.org=# BEGIN; ALTER SEQUENCE report_log_id_seq RESTART WITH 20145;
freshports.org=# COMMIT;

I remembered this issue while sorting out a configuration & code error this morning.

Jan 27, 2019

After enabling the report notifications yesterday, they failed to go out. Why? A hardcoded hostname in a Perl module.

Here are the errors I found this morning.

from='FreshPorts Watch Daemon <FreshPorts-Watch@FreshPorts.org>' to='dvl@example.org' subject='FreshPorts daily new ports'
could not open Email::Sender. from='FreshPorts Watch Daemon <FreshPorts-Watch@FreshPorts.org>' to='dvl@example.org' subject='FreshPorts daily new ports' errorcode='unable to establish SMTP connection to cliff.int.example.net port 25
Trace begun at /usr/local/lib/perl5/site_perl/Email/Sender/Transport/SMTP.pm line 193
Email::Sender::Transport::SMTP::_throw('Email::Sender::Transport::SMTP=HASH(0x806f2ea68)', 'unable to establish SMTP connection to cliff.int.example.net port 25') called at /usr/local/lib/perl5/site_perl/Email/Sender/Transport/SMTP.pm line 143
Email::Sender::Transport::SMTP::_smtp_client('Email::Sender::Transport::SMTP=HASH(0x806f2ea68)') called at /usr/local/lib/perl5/site_perl/Email/Sender/Transport/SMTP.pm line 202

The interesting part, to me, was the host it was trying to contact: cliff.int.example.net

That is an internal host, here in my home network. Do I have my configuration wrong?

Let’s check:

$ sudo grep cliff -r /usr/local/etc/freshports/*
$ sudo grep cliff -r /usr/local/libexec/freshports/*
$ sudo grep -r cliff /usr/local/lib/perl5/site_perl/FreshPorts/*
/usr/local/lib/perl5/site_perl/FreshPorts/email.pm:		host => 'cliff.int.example.net', # $FreshPorts::Config::email_server,

Oh, there it is, in the email module, along with the commented out value it should be using.

I suspect I used that for testing at home, then checked it in without seeing what was there.

Fixing it

The host in question is a jail without any public IP addresses. Other jails communicate with this jail via a localhost address:

lo1: flags=8049 metric 0 mtu 16384
	inet netmask 0xffffffff 
	groups: lo 

Note that this is lo1, not lo0. It is a clone of lo0. Note also the address in use. I like using addresses in the 127.0.0.0/8 block because it is assigned for use as the Internet host loopback address.

The configuration I had was:

$ sudo grep FreshPorts::Config::email_server *
config.pm:$FreshPorts::Config::email_server			= '';

I modified the code in production (yes, testing in prod we are) to use the supplied configuration value:

$ cd /usr/local/lib/perl5/site_perl/FreshPorts
$ grep email_server email.pm 
		host => $FreshPorts::Config::email_server,

I tried the email testing code, specifically designed to test sending of email. I wonder why I had not done this before.

$ cd /usr/local/libexec/freshports
$ echo ./test-sending-email.pl | sudo su -fm freshports
from='FreshPorts Watch Daemon ' to='dan@langille.org'
subject='FreshPorts test email - x8dtu-ingress01.int.unixathome.org'
could not open Email::Sender.  from='FreshPorts Watch Daemon ' 
to='dan@langille.org' subject='FreshPorts test email - x8dtu-ingress01.int.unixathome.org' errorcode='can't 
STARTTLS: 2.0.0 Ready to start TLS

What does the mail log say?

Jan 27 15:04:17 x8dtu-ingress01 postfix/smtpd[14533]: connect from unknown[]
Jan 27 15:04:17 x8dtu-ingress01 postfix/smtpd[14533]: SSL_accept error from unknown[]: 0
Jan 27 15:04:17 x8dtu-ingress01 postfix/smtpd[14533]: warning: TLS library problem: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1498:SSL alert number 80:
Jan 27 15:04:17 x8dtu-ingress01 postfix/smtpd[14533]: lost connection after STARTTLS from unknown[]

Umm, what key is being used by Postfix?

$ postconf -n | grep key
smtp_tls_key_file = /usr/local/etc/ssl/x8dtu-ingress01.int.unixathome.org.key
smtpd_tls_key_file = /usr/local/etc/ssl/x8dtu-ingress01.int.unixathome.org.key

Ahh, I cannot specify the IP address, I must use the hostname, otherwise TLS will fail based on the certificate.

I changed the entry in the configuration file:

$ cd /usr/local/etc/freshports/
$ sudo grep email_server *
config.pm:$FreshPorts::Config::email_server			= 'x8dtu-ingress01.int.unixathome.org';

And added this entry to the hosts file:

$ grep x8dtu /etc/hosts
	x8dtu-ingress01.int.unixathome.org

The entry is required because this hostname is not present in DNS.

Now the email goes out:

[dan@x8dtu-ingress01:/usr/local/libexec/freshports] $ echo ./test-sending-email.pl | sudo su -fm freshports
from='FreshPorts Watch Daemon ' to='dan@example.org' subject='FreshPorts test email - x8dtu-ingress01.int.unixathome.org'
finish 2019-01-27 15:12:17

I then went back to my development server and fed those code changes back into the repository. Testing in dev showed a problem with my Let’s Encrypt certificate which was not being refreshed on this host. It was being renewed, but not being installed.

Further tests in test and stage resulted in changes to $FreshPorts::Config::email_server on those hosts, because localhost and IP addresses were in use. They were changed to hostnames.

Eventually, the code was installed in production. It seems I spent more time getting things working in dev, test, and staging than I did fixing production.

Let’s see if the report notifications go out tonight.

For the record, I did check the report_log_latest tables and confirmed that the latest entries were still back in November. Thus, the reports compiled tonight will cover the correct period.

Jan 26, 2019

Earlier today I copied data from the old server (supernews) to the new server (x8dtu). Now that the database has the correct information regarding when reports were last sent out, we can begin to enable those reports.

The work has already been done to move the reports from cronjobs into periodic scripts. For our purposes, three new periodic categories have been added:

  • everythreeminutes
  • hourly
  • fortnightly

I also created the corresponding directories:

[dan@x8dtu-ingress01:/usr/local/etc/periodic] $ ls
daily             fortnightly       monthly           weekly
everythreeminutes hourly            security

These directories contain the FreshPorts scripts, installed via pkg:

[dan@x8dtu-ingress01:/usr/local/etc/periodic] $ ls everythreeminutes hourly fortnightly

fortnightly:
310.send-report-notices-fortnightly   320.send-report-new-ports-fortnightly

hourly:
120.fp_test_master_port_make       180.fp_stats_hourly                260.fp_refresh_various_cache_items
140.fp_test_master_port_db         240.fp_missing_port_categories
[dan@x8dtu-ingress01:/usr/local/etc/periodic] $

Instructing periodic to run those scripts looks something like this:

$ grep periodic /etc/crontab 
*/3	*	*	*	*	root	periodic everythreeminutes
0	*	*	*	*	root	periodic hourly
1	3	*	*	*	root	periodic daily
15	4	*	*	6	root	periodic weekly
20	3	9,23	*	*	root	periodic fortnightly
30	5	1	*	*	root	periodic monthly

I knew that the script which does the reporting is report-notification.pl so I went looking to see what is using it:

[dan@x8dtu-ingress01:/usr/local/etc/periodic] $ grep -r report-notification.pl *
daily/310.send-report-notices-daily:	echo "cd $fp_scripts_dir && /usr/local/bin/perl report-notification.pl D" | su -fm $fp_freshports_user 2>&1 >> ${DIRLOG}/report-notification.daily || rc=3
fortnightly/310.send-report-notices-fortnightly:	echo "cd $fp_scripts_dir && /usr/local/bin/perl report-notification.pl F" | su -fm $fp_freshports_user 2>&1 >> ${DIRLOG}/report-notification.fortnightly || rc=3
monthly/310.send-report-notices-monthly:	echo "cd $fp_scripts_dir && /usr/local/bin/perl report-notification.pl M" | su -fm $fp_freshports_user 2>&1 >> ${DIRLOG}/report-notification.monthly || rc=3
weekly/310.send-report-notices-weekly:	echo "cd $fp_scripts_dir && /usr/local/bin/perl report-notification.pl W" | su -fm $fp_freshports_user 2>&1 >> ${DIRLOG}/report-notification.weekly || rc=3
[dan@x8dtu-ingress01:/usr/local/etc/periodic] $

Ahh, yes, it does look like I’ve already done this work for each reporting time period.

Next, let’s see what knobs we must enable.

[dan@x8dtu-ingress01:/usr/local/etc/periodic] $ grep enable `grep -rl report-notification.pl *`
daily/310.send-report-notices-daily:case "$fp_send_report_notices_daily_enable" in
fortnightly/310.send-report-notices-fortnightly:case "$fp_send_report_notices_fortnightly_enable" in
monthly/310.send-report-notices-monthly:case "$fp_send_report_notices_monthly_enable" in
weekly/310.send-report-notices-weekly:case "$fp_send_report_notices_weekly_enable" in

Those are the four flags I have to enable to get this working.

  1. fp_send_report_notices_daily_enable
  2. fp_send_report_notices_fortnightly_enable
  3. fp_send_report_notices_monthly_enable
  4. fp_send_report_notices_weekly_enable

Let’s check the configuration file:

$ grep report /etc/periodic.conf
# reports

With a quick sudoedit, I enabled all of those entries. I could have also used sysrc, but I figured sudoedit would be fine.

Now we wait.