One idea I had for a site that would hopefully bring people together is a place where we can show actual concept and technologies that are being used instead of just using "theory" to solve real problems and create real solutions.
What better idea than starting with this site!
There are so many techniques and technologies that I take for granted that sometimes it's hard to encompass all of them. So, what I hope to do is post about each technique or technology used in this project.
If you have a Proof of Concept (POC) that you'd like to share, please start your own thread and let us in on your skills and techniques!
Brad
The Field Exit site extensively uses Server Side Includes (SSIs) to display static and dynamic content. From the headers to the footers, to displaying groups, forums and topics, we really try to use SSI wherever we can.
The first example you'll see is that every page has (or should have!) a footer with copyright information. This is done by using a static SSI at the bottom of each page, like so:
<!--#include virtual="/ssi/bottom.html" -->
This SSI statement is interpreted by the HTTP Web Server. The contents of "/ssi/bottom.html" are then included in place of this SSI statement.
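Conceptually, the server's include processing is just a text substitution pass over the page before it's sent to the browser. Here's a minimal hypothetical sketch in JavaScript (not the actual IBM HTTP Server code) showing the idea:

```javascript
// Hypothetical sketch of an SSI processor expanding include directives.
// readFile is a lookup from a virtual path to that file's contents.
function expandSSI(html, readFile) {
  // Match <!--#include virtual="/path" --> directives
  return html.replace(
    /<!--#include virtual="([^"]+)"\s*-->/g,
    (match, path) => readFile(path) ?? match // leave directive if file missing
  );
}

// Example: a page whose footer comes from /ssi/bottom.html
const files = { "/ssi/bottom.html": "<hr />(c) Copyright BVSTools" };
const page = '<p>Content</p>\n<!--#include virtual="/ssi/bottom.html" -->';
const expanded = expandSSI(page, p => files[p]);
```

The real server does this on every response for files it's configured to parse, which is why the browser only ever sees the final, merged HTML.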
The contents of the /ssi/bottom.html file are as follows:
<!--begin bottom-->
<br />
<hr />
<table width="100%">
<tr>
<td class="small1" nowrap="nowrap">© Copyright 1983-2014 BVSTools
<br />GreenBoard(v2) Powered by the
<a href="http://www.erpgsdk.com" target="_blank">eRPG SDK</a>,
<a href="https://bvstools.com/mailtool.html" target="_blank">MAILTOOL Plus!</a>,
<a href="http://jquery.com" target="_blank">jQuery</a>,
<a href="http://jqueryui.com/" target="_blank">jQuery UI</a>,
<a href="http://ckeditor.com/" target="_blank">CKEditor</a>
<br /></td>
</tr>
</table>
<!--end bottom-->
Another way we use SSI is for the header, title and meta tags used by crawlers to index sites. Because we want each page to be different, we use a CGI (in this case eRPG) program so the information generated can be dynamic.
First, we set up a physical file as such:
File Name . . . . PAGEPF
Library . . . . GREENBOARD
Format Descr . .
Format Name . . . RPAGE
File Type . . . . PF Unique Keys - N
Field Name FMT Start Lngth Dec Key Field Description
PAGEPATH A 1 128 Page Path
TITLE A 129 1024 Page Title
METADESC A 1153 1024 Meta Description
Next, we added a SSI directive to each of our pages.
<!--#include virtual="/forum/top?path=/index.html" -->
As you can see, instead of a static file, this SSI directive is calling a CGI program and passing in the page name (in this case, /index.html).
There is one CGI program that is a little different, and that's the one that displays the message. The SSI directive for this program (DISPLAY) looks like this:
<!--#include virtual="/forum/top?path=/forum/display&subject=/%subject%/" -->
Obviously we are replacing the /%subject%/ string with the actual subject of the message.
Our CGI program reads the value of the path and subject variables. If the subject is blank, it goes to the PAGEPF file and tries to find the Title and Meta Description for that path. If the path isn't found, our program outputs preset text for the title and description. If a subject is passed in, it is used for both the title and meta description.
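That decision logic can be summed up in a short hypothetical JavaScript sketch, with the PAGEPF lookup replaced by a plain object (the sample path and titles are illustrative; the default text matches the program shown below):

```javascript
// Hypothetical sketch of the title/meta-description decision.
// pagePF stands in for the PAGEPF physical file, keyed by page path.
const pagePF = {
  "/index.html": { title: "Field Exit Home", metadesc: "IBM i community home page" }
};
const DEFAULT_TEXT =
  "Field Exit - IBM i (System i, iSeries, AS400) Blog, Forum and Community";

function resolveHead(path, subject) {
  if (subject && subject.trim() !== "") {
    // A subject always wins: use it for both title and description
    const t = subject.trim() + " - FieldExit.com";
    return { title: t, metadesc: t };
  }
  const row = pagePF[path];          // look the path up in "PAGEPF"
  if (row) return { title: row.title, metadesc: row.metadesc };
  return { title: DEFAULT_TEXT, metadesc: DEFAULT_TEXT }; // fallback text
}
```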
Here is a snippet from the template used by the CGI program "top":
<html>
<head>
<title>/%title%/</title>
<meta name="description" content="/%metadesc%/">
...
Now, in our CGI program we simply replace these values with the values from the PAGEPF file (or the subject, if it's passed in):
#startup();
inPagePath = #getData('path':1:LOWER);
inSubject = #getData('subject');
#writeTemplate('stdhtmlheader.erpg');
#loadTemplate('top.erpg');
if (inSubject <> ' ');
#replaceData('/%title%/':%trim(inSubject) + ' - FieldExit.com');
#replaceData('/%metadesc%/':%trim(inSubject) + ' - FieldExit.com');
else;
exec SQL
select 1, TITLE, METADESC
into :i, :TITLE, :METADESC
from PAGEPF
where PAGEPATH = :inPagePath;
if (i = 1);
#replaceData('/%title%/':TITLE);
#replaceData('/%metadesc%/':METADESC);
else;
#replaceData('/%title%/':
'Field Exit - IBM i (System i, iSeries, AS400) ' +
'Blog, Forum and Community');
#replaceData('/%metadesc%/':
'Field Exit - IBM i (System i, iSeries, AS400) ' +
'Blog, Forum and Community');
endif;
endif;
#writeSection();
#cleanup();
*INLR = *on;
Pretty simple program, especially with the help of the eRPG SDK. Using a product like CGIDEV2 would be just as easy.
There are many other uses of SSI in this web application. We will hopefully cover those in more detail in the future.
Brad
Because we only have one public IP address to run multiple websites on our IBM i, we need to do what is called Reverse Proxy with the Apache server.
What this allows us to do is route requests for different domains to separate local IP addresses inside our network.
Think of it this way. Each site (ie, www.bvstools.com, www.fieldexit.com, etc) will point to one external IP address, but each will have its own internal IP address. As you probably know, the IBM i is unique in that you can create multiple IP interfaces (or IP addresses) for just one NIC. Pretty cool if you ask me!
Each site will also have its own HTTP server instance and configuration. But one of the servers has to be the "gate keeper". In this case we have one instance named PROXY that does all the routing of requests. That's why for years I've always recommended that when you create IBM i web server instances, you always specify a specific IP address and port (or ports). If you don't, the server will bind to any and all interfaces set up on your system, and you'll really only be able to run one instance (no fun and a nightmare to configure).
Within the configuration for our PROXY server instance, we have the following entries at the top of the configuration file to allow the reverse proxy to work:
LoadModule proxy_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
LoadModule proxy_http_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
LoadModule proxy_connect_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
LoadModule proxy_ftp_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
These entries tell the Apache server to load the specific modules so that the proxy functions will work.
The reverse proxy entries look like the following (for each host/site there is a VirtualHost container):
<VirtualHost xx.xx.xx.16:80>
ServerName fieldexit.com
ServerAlias *.fieldexit.com
ProxyPreserveHost On
RewriteEngine On
RewriteRule ^(.*)$ http://xx.xx.xx.23$1 [P]
</VirtualHost>
So, what happens here is a request comes from the internet to www.fieldexit.com. The DNS points this to the external IP address of the website.
The request hits our firewall and is routed to the appropriate internal IP address. In this case, we map all HTTP (port 80) requests to xx.xx.xx.16 port 80 which is an IP address on our IBM i running the PROXY web server instance.
The server sees the request is really for www.fieldexit.com and forwards the request to IP address xx.xx.xx.23, which is also on the same IBM i and running its own web server instance, and therefore has its own configuration.
It's pretty simple, but setting it up and getting everything correct was a bugger at first. But now that it's been done once, it can easily be copied should we need to run additional web servers on our IBM i.
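The routing decision itself boils down to "match the Host header, pick an internal address". Here's a hypothetical JavaScript sketch of that step (the xx.xx.xx.24 address is illustrative; on the real system, Apache's VirtualHost matching and the RewriteRule do this work):

```javascript
// Hypothetical sketch of the PROXY instance's routing decision:
// the Host header selects which internal IP the request is forwarded to.
const backends = {
  "fieldexit.com": "http://xx.xx.xx.23",
  "bvstools.com": "http://xx.xx.xx.24"   // illustrative address
};

function routeRequest(hostHeader, path) {
  // Strip any subdomain so *.fieldexit.com matches fieldexit.com,
  // like the ServerAlias directive does
  const host = hostHeader.split(".").slice(-2).join(".");
  const backend = backends[host];
  return backend ? backend + path : null; // null -> no VirtualHost matched
}
```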
Setting up HTTP server instances and configuration files can be tricky. Some choose to use "Wizards" or the HTTP Server Admin service. I feel both are overly complicated for the majority of sites.
When we set up a new site, the first thing we do is create the server instance. This is as easy as creating a member in file QUSRSYS/QATMHINSTC with the name of the instance we want. In this case, we called it GREENBOARD.
The sole purpose of this member is to tell the IBM HTTP server where to find the configuration file, as well as a few other settings:
-apache -d /www/conf -f greenboard.conf -AutoStartY -uiMin 10 -uiMax 300
The only things you need to worry about changing are the path (/www/conf) and the name (greenboard.conf) of the HTTP configuration file that will be used.
The other parameters, such as AutoStart and uiMin and uiMax (which tell the server how many threads/jobs to start right away, and the maximum number of threads/jobs to run), can be tweaked along the way. I've found that with busy sites that use a lot of Server Side Includes (SSIs), it's good to have a larger uiMax number, or the server may "choke" on all the requests.
The next step is to create our configuration file. In this case, it will be in the IFS and will be /www/conf/greenboard.conf. The contents are as simple as the following:
Listen xx.xx.xx.23:80
LogFormat "%h %l %u %t \"%r\" %s %b" Common
ScriptLog /www/logs/greenboard/cgierrorlog
ErrorLog /www/logs/greenboard/errorlog
UseCanonicalName Off
CustomLog /www/logs/greenboard/accesslog Common
CustomLog /www/logs/greenboard/refererlog "%{Referer}i -> %U"
ErrorDocument 404 /index.html
CGIConvMode %%EBCDIC/EBCDIC%%
SetEnv GREENBOARD_SYSID fieldexit
DocumentRoot /www/greenboard/html
DirectoryIndex index.html
ScriptAliasMatch ^/forum/(.*) /qsys.lib/greenboard.lib/$1.pgm
<Directory />
Options None
order deny,allow
deny from all
</Directory>
<Directory /www/greenboard/html>
order allow,deny
allow from all
<FilesMatch "\.html(\..+)?$">
Options +Includes
SetOutputFilter Includes
</FilesMatch>
</Directory>
<Directory /qsys.lib/greenboard.lib>
allow from all
order allow,deny
Options +ExecCGI +Includes
SetOutputFilter Includes
</Directory>
As far as HTTP configurations go, this one is pretty simple.
It tells the server the document root, how to run CGI programs (ie, /forum maps to /qsys.lib/greenboard.lib) and sets authorities to directories and libraries used by the web site. It also sets up the ability to use Server Side Includes (SSI) in both static and dynamic (CGI) pages.
One special thing in this setup is the use of the SetEnv keyword. What this allows us to do is set environment variables for the entire site. In this case, we set the value of an environment variable named GREENBOARD_SYSID to "fieldexit". This is really the top-level key to the databases that are used on the site to store posts and define the forum layout. This was done so that if we ever created a new discussion forum (with a new name) we could key it differently by specifying a different value for GREENBOARD_SYSID.
So, in our applications we simply use the following to retrieve the environment variable:
sysID = #GetEnv('GREENBOARD_SYSID');
The #GetEnv() subprocedure is part of the eRPG SDK, but with most systems like CGIDEV2 there are similar methods for this.
Now I invite you to go take a look at a configuration created by a "wizard". I think you'll find it's quite cumbersome and complicated, and really doesn't do much more (and possibly even does less) than this simple configuration.
We recently put together an article on how to use Google Sign-In with your Web Applications on the IBM i. We felt it was a good article as well as a Proof of Concept, so we're linking it here:
Google Sign-In Integration with your IBM i RPG Web Application
We wanted to put together a unique way of keeping a log of posts, replies and edits on the system and thought it would be a good idea to use the Google Calendar addon for our GreenTools for Google Apps (G4G) application.
The first thing we did was create a Google Calendar specifically for this project. That was easy enough and doesn't need to be documented.
The next step was to get the ID of that calendar as well as set it up in G4G. This was done using the G4GLSTCAL (List Calendars) command. This will list all the calendars available for our Google ID and we can find the proper ID to use for the rest of the application.
The next thing we needed to do was write a program that could be called passing in an action (ie, edit, reply or add) and a thread ID. Each message on the system has a unique thread ID, so once we have that we should be able to get any other information we need.
We chose not to make a subprocedure out of this, mainly because we were going to submit the call to the program so that the time it takes to add the calendar event wouldn't be inline with the actual posting or updating of a message.
As mentioned our program accepts two parameters, a thread ID and an action.
Once we have those items, we query the thread file using the thread ID and build our calendar event. A portion of the ADDCALEVT program is as follows:
calendarID = 'uacckdvn88rtn765ps9ei8vk1g@group.calendar.google.com';
eventSubject = '[' + %trim(inAction) + ']:' + %trim(SUBJECT);
eventDesc = 'Thread ID ' + %trim(THREADID) + ' had the action ' +
%trim(inAction) + ' performed by ' + %trim(AUTHOR) + '. ' +
'https://www.fieldexit.com/forum/display?threadid=' +
%trim(THREADID);
rc = #g4gcal_setValue('id':'bvstone@gmail.com');
rc = #g4gcal_setValue('calendar_id':calendarID);
rc = #g4gcal_setValue('event_title':eventSubject);
if (inAction = 'edit');
eventDate = EDITDATE;
else;
eventDate = POSTDATE;
endif;
rc = #g4gcal_setValue('start_date':%char(%date(eventDate):*USA0));
rc = #g4gcal_setValue('start_time':%char(%time(eventDate):*HMS0));
rc = #g4gcal_setValue('end_date':%char(%date(eventDate):*USA0));
rc = #g4gcal_setValue('end_time':%char(%time(eventDate):*HMS0));
rc = #g4gcal_setValue('event_description':eventDesc);
rc = #g4gcal_addEvent(eventID);
Now we see why we needed the calendar ID. When we add a calendar event, we obviously need to know the ID of the calendar to add it to. So, we set that, as well as a few other things (start time, end time, event title and event description).
When we finally call the #g4gcal_addEvent() procedure, an event should be added to our Google Calendar.
Now, this program is actually called from the program that adds or updates posts on this page, like this:
QCmdCmd = 'SBMJOB CMD(CALL PGM(ADDCALEVT) PARM(''' +
newThreadID +
''' ''edit'')) ' +
'JOB(GBFCAL) JOBQ(*LIBL/QSYSNOMAX)';
callp(e) #QCmdExc(QCmdCmd:QCmdLength);
Depending on if it's an edit, reply or add the appropriate value will be passed to the ADDCALEVT program.
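The only fiddly part of the submit is CL quoting: each parameter is wrapped in single quotes inside the command string (with embedded quotes doubled, standard CL escaping). A hypothetical JavaScript sketch of building that command:

```javascript
// Hypothetical sketch of building the SBMJOB command string that submits
// the ADDCALEVT call to batch, mirroring the RPG concatenation above.
function buildSubmitCmd(threadID, action) {
  // CL string literals use single quotes; embedded quotes are doubled
  const quote = s => "'" + s.replace(/'/g, "''") + "'";
  return "SBMJOB CMD(CALL PGM(ADDCALEVT) PARM(" +
         quote(threadID) + " " + quote(action) + ")) " +
         "JOB(GBFCAL) JOBQ(*LIBL/QSYSNOMAX)";
}
```

Submitting to a *NOMAX job queue means the calendar update runs immediately in the background without the web job waiting on it.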
Anyone who has a site wants to make sure that web crawlers find it, and that when they do, they get the proper information so the site can be indexed.
If you dig really deep into Search Engine Optimization (SEO) things get pretty complicated. But, on the surface there are a few things web developers can do to nudge their site up in the lists. One of them is using Sitemaps. You can read about Sitemaps here.
Sitemaps make it easier for web crawlers to index and search your site. There are even a few webpages that will create Sitemaps for you that you can download and place on your server for crawlers to find.
But what if your site is dynamic, like this one? That means you'll probably want to have a dynamic Sitemap as well. Here's how we did it for this site.
Step 1 - Create an eRPG/CGI program that will output dynamic XML for the web crawler to use
In our case, we have a few static pages, and many dynamic pages (each post will be its own page). So we can start out by creating a template with a section for static content.
/$top
<?xml version="1.0" encoding="UTF-8"?>
<urlset
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9
http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
<url>
<loc>https://www.fieldexit.com/</loc>
</url>
<url>
<loc>https://www.fieldexit.com/forum/list</loc>
</url>
<url>
<loc>https://www.fieldexit.com/login.html</loc>
</url>
<url>
<loc>https://www.fieldexit.com/signup.html</loc>
</url>
<url>
<loc>https://www.fieldexit.com/changepw.html</loc>
</url>
Next, we will want to index our group lists, forum lists, and finally our threads (posts) so each of them gets their own entry in the Sitemap file.
/$groups
<url>
<loc>https://www.fieldexit.com/forum/list?groupid=/%groupid%/</loc>
</url>
/$forums
<url>
<loc>https://www.fieldexit.com/forum/thread?groupid=/%groupid%/&amp;forumid=/%forumid%/</loc>
</url>
/$threads
<url>
<loc>https://www.fieldexit.com/forum/display?threadid=/%threadid%/</loc>
<lastmod>/%lastmod%/</lastmod>
</url>
/$end
</urlset>
Our eRPG program will then read through the group, forum and thread file and create an entry corresponding to each of them:
H DFTACTGRP(*NO) BNDDIR('GREENBOARD')
****************************************************************
* Prototypes *
****************************************************************
/COPY QCOPYSRC,P.ERPGSDK
/COPY QCOPYSRC,P.GBFORUM
****************************************************************
* Copy Members *
****************************************************************
/COPY QCOPYSRC,SQL
****************************************************************
D GROUPDS E DS EXTNAME(GROUPPF) PREFIX(g_)
D FORUMDS E DS EXTNAME(FORUMPF) PREFIX(f_)
D THREADDS E DS EXTNAME(THREADPF) PREFIX(t_)
*
D sysID S LIKE(g_SYSID)
****************************************************************
/free
Exec Sql Set Option Datfmt=*Iso, Commit=*None, Closqlcsr=*Endmod;
sysID = #gbf_getSysID();
#startup();
#writeTemplate('stdxmlheader.erpg');
#loadTemplate('sitemap.erpg');
#writeThisSec('top');
EXSR $Groups;
EXSR $Forums;
EXSR $Threads;
#writeThisSec('end');
#cleanup();
*INLR = *on;
//-------------------------------------------------------------/
// List Groups /
//-------------------------------------------------------------/
begsr $Groups;
#loadSection('groups');
exec sql
declare C1 cursor for
select GROUPID, GROUPDESC
from GROUPPF
where
SYSID = :sysID and
GROUPID <> 'test';
exec sql open C1;
exec sql fetch from C1 into
:g_GROUPID, :g_GROUPDESC;
dow (xSQLState2 = Success_On_Sql);
#replaceData('/%groupid%/':g_GROUPID);
#writeSection();
exec sql fetch from C1 into
:g_GROUPID, :g_GROUPDESC;
enddo;
exec sql close C1;
endsr;
//-------------------------------------------------------------/
// List Forums /
//-------------------------------------------------------------/
begsr $Forums;
#loadSection('forums');
exec sql
declare C2 cursor for
select GROUPID, FORUMID, FORUMDESC
from FORUMPF
where
SYSID = :sysID and
GROUPID <> 'test';
exec sql open C2;
exec sql fetch from C2 into
:f_GROUPID, :f_FORUMID, :f_FORUMDESC;
dow (xSQLState2 = Success_On_Sql);
#replaceData('/%groupid%/':f_GROUPID);
#replaceData('/%forumid%/':f_FORUMID);
#writeSection();
exec sql fetch from C2 into
:f_GROUPID, :f_FORUMID, :f_FORUMDESC;
enddo;
exec sql close C2;
endsr;
//-------------------------------------------------------------/
// List Threads /
//-------------------------------------------------------------/
begsr $Threads;
#loadSection('threads');
exec sql
declare C3 cursor for
select THREADID, SUBJECT, AUTHOR, POSTDATE, EDITDATE
from THREADPF
where
SYSID = :sysID and
GROUPID <> 'test' and
ACTIVE = 'Y';
exec sql open C3;
exec sql fetch from C3 into
:t_THREADID, :t_SUBJECT, :t_AUTHOR, :t_POSTDATE, :t_EDITDATE;
dow (xSQLState2 = Success_On_Sql);
#replaceData('/%threadid%/':t_THREADID);
if (t_EDITDATE <> *LOVAL);
#replaceData('/%lastmod%/':%char(%date(t_EDITDATE):*ISO-));
else;
#replaceData('/%lastmod%/':%char(%date(t_POSTDATE):*ISO-));
endif;
#writeSection();
exec sql fetch from C3 into
:t_THREADID, :t_SUBJECT, :t_AUTHOR, :t_POSTDATE, :t_EDITDATE;
enddo;
exec sql close C3;
endsr;
This program, which uses the eRPG SDK, is fairly straightforward. First it outputs the static section of our template, then it will read through the group, forum and thread files and output items for each of those. We've been using SQL more and more these days but you could easily use native I/O (ie SETLL, READE) processing for this as well.
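The thread loop boils down to: emit one &lt;url&gt; entry per thread, preferring the edit date over the post date for &lt;lastmod&gt;. A hypothetical JavaScript sketch of that step (the field names on the thread object are illustrative):

```javascript
// Hypothetical sketch of the $Threads section of the sitemap program:
// one <url> entry per thread, with <lastmod> taken from the edit date
// when the thread has been edited, otherwise the original post date.
function threadEntry(thread) {
  const lastmod = thread.editDate || thread.postDate; // prefer last edit
  return "<url>\n" +
         "<loc>https://www.fieldexit.com/forum/display?threadid=" +
         thread.threadID + "</loc>\n" +
         "<lastmod>" + lastmod + "</lastmod>\n" +
         "</url>";
}
```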
UPDATE (08/26/2014):
Our program has been updated to exclude any threads with a group ID of "test". This is because we set up a "test" group and forum where users can play with the editor, play around with the site, etc. We now exclude these from our sitemap so that they are not indexed (or, at least we're telling Google not to index them; we may need to set up a robots.txt file to tell it to ignore those posts as well).
The final result can be seen by clicking here.
Step 2 - Tell Google (or Bing, etc) Where your Sitemap file is
For this example we'll focus on using Google and their Webmaster tools. Bing is similar, and I believe Yahoo may also have their own set of tools.
For Google, you'll go to the Webmaster Tools. Next, select the site you want to work with (or set one up if you haven't yet!). On the dashboard for your site you should see (currently it's the far right) a section for sitemap. If you have a sitemap set up already, it will tell you how many pages it has indexed using your Sitemap. If not, it will give you an option to tell it where your sitemap is.
In our case, we stuck with the default location by specifying /sitemap.xml in the root of our server as our sitemap file.
You're probably thinking "But wait! Your Sitemap is a CGI program! Not a static file!" That's true, but using Server Side Includes (SSIs) we can populate what the website crawler thinks is a static file with dynamic content from the CGI program we created earlier.
We will create a file in our root named sitemap.xml and it will contain the following code:
<!--#include virtual="/forum/sitemap" -->
In the case of this site, we have the /forum directory mapped to run CGI programs. A lot of times you'll see /cgi-bin used, but we decided to use something different. Also, "sitemap" is the name of the CGI program we created earlier to dynamically produce our sitemap XML data.
We also need to make sure our Apache server configuration will parse SSI directives in documents that end in XML. Right now if you have an Apache server set up, it's probably only set up to look for SSI directives in HTML pages like so:
<FilesMatch "\.html(\..+)?$">
Options +Includes
SetOutputFilter Includes
</FilesMatch>
But, we can easily add files that end in .xml to this directive so they will be processed as well by changing it to this (and of course stopping and restarting the server for the changes to take effect):
<FilesMatch "\.(html|xml)(\..+)?$">
Options +Includes
SetOutputFilter Includes
</FilesMatch>
Now when our Apache server serves up a file ending in .xml (like our sitemap.xml file), it will look for SSI directives and process them.
You can see the results by clicking here, which is a link to the sitemap.xml file for this site. You'll notice it's exactly the same as the output created by calling the CGI program directly.
Now, any time a thread is added or updated, the information in the Sitemap file for our site will be automatically updated, and when the Google crawler comes by next time, our site will (hopefully!) be reindexed.
AJAX is a great little tool for any web programmer to take advantage of. It can make pages seem more "interactive" and can help make your site seem more professional.
One way that we're using AJAX in the Field Exit site is for validating fields in web forms.
Examine the following web page source for our Log In page:
<form id="updateForm" action="/forum/login" method="POST">
<input name="id" id="loginID" type="hidden">
<table>
<tr>
<td>User ID:</td>
<td><input class="validateField" id="userid" name="userid">
<div class="error" id="useridError"></div>
</td>
</tr>
<tr>
<td>Password:</td>
<td>
<input class="validateField" id="password" name="password" type="password">
<div class="error" id="passwordError"></div>
</td>
</tr>
<tr>
<td colspan="2"><a class="button roundedall" id="loginButton" href="#">Log In</a></td>
</tr>
</table>
</form>
This is a pretty standard form that has three input fields. First, a hidden field named "id", then fields so the user can enter their user ID and password. You'll see each of the fields has not only a name (which is used when reading its value in our CGI program) but an ID as well, which should be unique. Each field also has a class named "validateField". This specific class will be used by jQuery to determine which fields will actually be validated.
There is then a <div> under each input field and the ID attribute for them is the same as the ID of the field, plus "Error". This <div> will be used to display any errors that are specific to the field it is related to.
When a user clicks the loginButton, jQuery is called to first perform validation on any fields with a class of "validateField", as shown in the jQuery (JavaScript) specific to our login page:
$(document).ready(function(){
$('#loginButton').click(function(e) {
e.preventDefault();
$('#loginID').val(randomString(256));
$('#updateForm').submit();
});
});
When the loginButton is clicked, we first prevent the default from happening (ie, the hyperlink to go the specified location, this is because we're using hyperlinks with CSS for our buttons instead of "real" buttons). Then we create a random string and change the value of the hidden ID field in our form to this value (this is just for validation so we know if it's a real person or not doing it).
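The randomString() helper isn't shown in the post; a minimal hypothetical sketch of what such a function might look like:

```javascript
// Hypothetical sketch of a randomString() helper: builds a string of the
// requested length from an alphanumeric character set.
function randomString(len) {
  const chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  let out = "";
  for (let i = 0; i < len; i++) {
    out += chars[Math.floor(Math.random() * chars.length)];
  }
  return out;
}
```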
Now, when the .submit() method is called for the form, that will trigger another event to fire (which is in our base JavaScript source):
$("#updateForm").submit(function(e) {
e.preventDefault();
$('.validateField').each(function(){
validateField($(this).attr('id'),1);
});
});
This next section of source will be run when a form with an ID of "updateForm" is submitted. Again, first we prevent the default form action from happening (ie, submitting the form). This is because we want to validate any fields for errors. The jQuery to follow will loop through each (using the .each() jQuery function) of the fields in the form with a class of "validateField" and call the validateField() function, which looks like the following:
function validateField(inID, index) {
xhr_get(inID, index, '/forum/a.validate').done(function(data) {
$('#' + inID + 'Error').text(data);
if (data.trim().length > 0) {
numberOfErrors++;
}
});
}
The validateField() function will make an AJAX call (which is the call to the xhr_get() function). If an error is returned, we increment the numberOfErrors counter (which is set to zero in another spot we'll cover later).
The AJAX function looks like the following:
function xhr_get(inID, index, url) {
var field = $('#' + inID).attr('name');
var dataValue = $('#' + inID).val();
var formData = $("#updateForm").serialize() + '&field=' + field + '&index=' + index;
return $.ajax({
type: 'POST',
url: url,
//async: false,
data: formData
})
.fail(function() {
// failed request; give feedback to user
return('<p class="error"><strong>Error validating field.</strong></p>');
});
}
The AJAX call will be to an eRPG program named A.VALIDATE, and it will pass in, at a minimum, the field name and the field data that we are validating. It will also pass in any other name/value pairs that may be in the form using the .serialize() method. When the AJAX call is done, it will have either returned blanks (no errors) or an error message specific to that field. We can see that when control is returned back to the validateField() function, if there was any data we increment a counter for the number of errors.
This is where it gets a little tricky. jQuery has a lot of built in functions that allow you to do some pretty cool stuff. One of them is you can run a function when an AJAX call starts, and another when an AJAX call ends. So, we use those to know when the validation starts and when it stops:
$(document).ajaxStart(function() {
numberOfErrors = 0;
showMessage('Validating data...');
});
$(document).ajaxStop(function() {
$.unblockUI();
if (numberOfErrors <= 0) {
showMessage('Processing data...');
$("#updateForm").unbind().submit();
}
});
So, the first time our AJAX validation program is called, we reset numberOfErrors to zero (and show a message on the screen that also blocks the UI, which is done using the blockUI plugin for jQuery). When the AJAX call ends, we check the number of errors. If it's less than or equal to zero, we finally let the form submit by "unbinding" it from the previous .submit() action and then submitting it (this way it won't get caught in a "loop" and never get submitted).
If there are any errors returned, we see that the validateField() function updates the division related to the field (ie, <fieldID>Error) with the text returned. Here's the specific line from the validateField() function that does that:
$('#' + inID + 'Error').text(data);
So, if the data is blank, the div will remain empty (or get "reset" if there was an error there previously). If there is an error, then it will show in the <div> for the field.
Now, our AJAX call is to a program named /forum/a.validate. That just means it's calling our eRPG program named "A.VALIDATE". I like to prefix any AJAX specific programs with "A.", that's why it's named that way.
Again, this program is pretty straightforward. The only "downside" to this is that for each special validation you need to update this program to accommodate that. Not a big deal, but still something that needs to be done. There are generic functions for checking a field isn't blank, and for checking numeric values, but other than that each subroutine is specific to each field.
H DFTACTGRP(*NO) BNDDIR('GREENBOARD')
****************************************************************
* Prototypes *
****************************************************************
/COPY QCOPYSRC,P.ERPGSDK
/COPY QCOPYSRC,P.STRING
/COPY QCOPYSRC,P.GBFORUM
/COPY QCOPYSRC,P.VLDL
****************************************************************
* Copy Members *
****************************************************************
/COPY QCOPYSRC,SQL
****************************************************************
* Data read in from page
D inField S 128
D inIndex S 10i 0
*
* Work Variables
D data S 1024
D message S 1024
D inUserID S 256
D inNewCPassword S 256
D i S 10i 0
D n S 25 5
****************************************************************
/free
Exec Sql Set Option Datfmt=*Iso, Commit=*None, Closqlcsr=*Endmod;
#vldl_setVldl('GBFUSERS':'GREENBOARD');
#startup();
message = ' ';
exsr $Input;
exsr $Validate;
#writeTemplate('stdhtmlheader.erpg');
#loadTemplate('a.validate.erpg':'top');
#replaceData('/%data%/':message);
#writeSection();
#cleanup();
return;
//*INLR = *on;
//-------------------------------------------------------------/
// Validate the Data /
//-------------------------------------------------------------/
begsr $Validate;
select;
when (inField = 'newuserid');
exsr $newuserid;
when (inField = 'fpwuserid');
exsr $fpwuserid;
when (inField = 'newpassword');
exsr $newpassword;
when (inField = 'password');
exsr $password;
when (inField = 'newemail');
exsr $email;
when (inField = 'none');
// do nothing here...
message = ' ';
other;
exsr $notBlank;
endsl;
endsr;
//-------------------------------------------------------------/
// Validate New User ID /
//-------------------------------------------------------------/
begsr $newuserid;
select;
when (data = ' ');
message = 'User ID cannot be blank.';
other;
select;
when (#vldl_userExists(data));
message = 'User ' +%trim(data) +' already exists.';
endsl;
endsl;
endsr;
//-------------------------------------------------------------/
// Forgot Password User ID /
//-------------------------------------------------------------/
begsr $fpwuserid;
select;
when (data = ' ');
message = 'User ID cannot be blank.';
other;
select;
when (not #vldl_userExists(data));
message = 'User ID ' +%trim(data) +' doesn''t exist.';
endsl;
endsl;
endsr;
//-------------------------------------------------------------/
// Validate Password /
//-------------------------------------------------------------/
begsr $newpassword;
select;
when (data = ' ');
message = 'Password cannot be blank.';
other;
if (data <> inNewCPassword);
message = 'Passwords do not match.';
endif;
endsl;
endsr;
//-------------------------------------------------------------/
// Validate Password /
//-------------------------------------------------------------/
begsr $password;
select;
when (data = ' ');
message = 'Password cannot be blank.';
other;
i = #gbf_isValidUser(inUserID:data);
select;
when (i < 0);
message = 'Invalid password for ' +
%trim(inUserID) + '. rc(' + %char(i) + ')';
when (i = 0);
message = 'User is not active.';
endsl;
endsl;
endsr;
//-------------------------------------------------------------/
// Validate email address /
//-------------------------------------------------------------/
begsr $email;
select;
when (data = ' ');
message = 'EMail Address cannot be blank.';
other;
if (%scan('@':data) < 2);
message = 'Invalid Email Address.';
endif;
endsl;
endsr;
//-------------------------------------------------------------/
// Validate Number (no decimals) /
//-------------------------------------------------------------/
begsr $number;
select;
when (data = ' ');
message = 'Value cannot be blank.';
other;
monitor;
n = %dec(data:15:0);
on-error;
message = 'Value must be a number with no decimal places.';
endmon;
endsl;
endsr;
//-------------------------------------------------------------/
// Validate Number (2 decimals) /
//-------------------------------------------------------------/
begsr $number2d;
monitor;
n = %dec(data:17:2);
on-error;
message = 'Value must be a number with ' +
'maximum of 2 decimal places.';
endmon;
endsr;
//-------------------------------------------------------------/
// Validate Field for Not Blank /
//-------------------------------------------------------------/
begsr $notBlank;
if (data = ' ');
select;
when (inField = 'subject');
message = 'Subject cannot be blank.';
when (inField = 'postdata');
message = 'Please enter a message.';
when (inField = 'userid');
message = 'User ID cannot be blank.';
when (inField = 'password');
message = 'Password cannot be blank.';
when (inField = 'newuserid');
message = 'User ID cannot be blank.';
when (inField = 'newpassword');
message = 'Password cannot be blank.';
when (inField = 'newemail');
message = 'EMail Address cannot be blank.';
other;
message = 'Invalid field ' + %trim(inField) + '.';
endsl;
endif;
endsr;
//-------------------------------------------------------------/
// Read input from web page /
//-------------------------------------------------------------/
begsr $Input;
inUserID = #getData('userid');
inNewCPassword = #getData('newcpassword');
inField = #getData('field');
inIndex = #CtoN(#getData('index'));
if (inIndex <= 0);
inIndex = 1;
endif;
data = #getData(inField:inIndex);
endsr;
/end-free
So, what we do here first is read the input data from the web page. Again, at the very least we will have a field named "field" that tells us which data field to validate, along with the data associated with that field.
We then use a trusty SELECT statement to determine which subroutine to call to validate that particular piece of data. We also have one special case for a field named "none" so that if we do have a web page with no validation, we can put a hidden field with this name in it and still process things through our AJAX validation.
If an error is found, it is written out to the page, which is then returned to the validateField() function so the error can be displayed on the web page.
You can test some of this out by going to the login screen and entering some invalid information, or just leaving the fields blank. Have fun!
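If you'd like to play with the same dispatch idea outside of RPG, here is a minimal plain-JavaScript sketch. The function and validator names are hypothetical (not part of the site's code); it just mirrors the SELECT-based routing and a couple of the checks, with the user-exists lookup omitted:

```javascript
// A sketch of the $Validate dispatch pattern: route the field name to a
// validator, return an error message (or an empty string when valid).
function validateField(field, data) {
  const validators = {
    // Mirrors $newuserid, minus the "user already exists" database check.
    newuserid: d => d.trim() === '' ? 'User ID cannot be blank.' : '',
    // Mirrors $email: %scan('@':data) < 2 means the '@' must have at
    // least one character before it (0-based indexOf < 1 in JavaScript).
    newemail: d => d.trim() === '' ? 'EMail Address cannot be blank.'
              : d.indexOf('@') < 1 ? 'Invalid Email Address.' : '',
    // The hidden "none" field: always passes, like the RPG special case.
    none: () => ''
  };
  const validator = validators[field];
  if (!validator) {
    // Mirrors the catch-all "other" branch for unknown fields.
    return 'Invalid field ' + field.trim() + '.';
  }
  return validator(data);
}
```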
We recently updated our website so that when you click on a link to an article or post, instead of loading an entire new page with that article we display its contents in a "Window" that is created using functions included in jQuery UI.
Now, I say "Window" in quotes because it's not an actual window, but an HTML/CSS representation of one. Either way, it's pretty neat.
The first issue we had was that all the links to the posts and articles looked like this:
https://www.fieldexit.com/forum/display?threadid=255
Just your normal every day standard hyperlink calling an application and passing some data to it (in this case the thread ID).
We didn't want to have to go through each application and link that already exists to update things, so we decided to find a solution using jQuery and jQuery UI. Not only will this allow us to customize the experience for visitors, but it also won't disrupt bots and crawlers indexing the site.
From our past adventures with jQuery we know that pretty much anything is possible. We know that for clicks, keyboard presses, etc., we can catch those actions and override them to do what we want. So, we created a new function and placed it in our document.ready() function:
$(document).ready(function(){
.... other jQuery functions
displayPostWindow();
});
The "displayPostWindow() function is used to override the action when a user clicks on a hyperlink that leads to a post. That function is as follows:
function displayPostWindow() {
$('body').on('click', 'a[href^="/forum/display"]', function(event) {
event.preventDefault();
$.ajaxSetup({
global: false
});
if (typeof theDialog == "undefined") { //check if div already exists
theDialog = $('<div id="postWindow" />').dialog({
autoOpen: false,
resizable: true,
modal: true,
width: "90%",
buttons: [{
text: "Close",
click: function() {
$( this ).dialog( "close" );
}
}]
});
}
theDialog.html('<img src="/images/loader.gif"/>').dialog('open');
theDialog.dialog({title: 'Loading content...'});
var url= $(this).attr('href') + '&heading=n';
$.get(url, function( data ) {
subject = $(data).find('h3').text();
theDialog.dialog({title: subject});
theDialog.html(data).dialog();
})
.always(function() {
$.ajaxSetup({
global: true
});
});
})
}
At first glance this may seem a bit complicated, but it's really not. Let's break it up into sections.
The first thing that happens is overriding the action that takes place when a user clicks on a hyperlink. In the past the click would call the "display" server side program and display the post. We overrode this action using the following code:
$('body').on('click', 'a[href^="/forum/display"]', function(event) {
event.preventDefault();
We are matching only clicks on hyperlinks that actually call the display server program using string matching techniques. We then use the preventDefault() function to stop the default action (which would be to call the display program).
Next, we turn off the global flag for Ajax requests. Part of this solution uses the jQuery get() function, which is really a simple wrapper for an Ajax call. We already had the blockUI addon set up so that on Ajax calls we display a "Please Wait" page block while we process a request (such as posting a new article). We don't want this to happen when we retrieve the information for the message to display, which is why we use the ajaxSetup() jQuery function:
$.ajaxSetup({
global: false
});
Later we will change this flag back to true so that other functions aren't messed up.
This next piece of code is used to create a Dialog object. It may look like a lot of work, but we need to make sure we don't duplicate the object. In the next section we are setting up the Dialog (which is part of jQuery UI) element that is used as the "window".
if (typeof theDialog == "undefined") { //check if div already exists
theDialog = $('<div id="postWindow" />').dialog({
autoOpen: false,
resizable: true,
modal: true,
width: "90%",
buttons: [{
text: "Close",
click: function() {
$( this ).dialog( "close" );
}
}]
});
}
theDialog.html('<img src="/images/loader.gif"/>').dialog('open');
theDialog.dialog({title: 'Loading content...'});
So, first we check to see if the Dialog object exists. If not, we create one and set the values for the dialog itself. Once that is complete we set the value of the dialog window to a loading image as well as set the title of the dialog to state that the data is being loaded.
The next step is to fill the dialog with the actual post information.
var url= $(this).attr('href') + '&heading=n';
$.get(url, function( data ) {
subject = $(data).find('h3').text();
theDialog.dialog({title: subject});
theDialog.html(data).dialog();
})
.always(function() {
$.ajaxSetup({
global: true
});
});
In order to call the jQuery get() function to retrieve the post information from our web application we need to create the URL to call. The URL we want to call actually already exists in the link almost exactly how we want it.
Because all of this code is running when a user clicks a hyperlink to view an article or post, we can refer to the link object itself as "$(this)". In other words, we can work with "this" link that was clicked.
We grab the href value from the link and tack on an extra parameter that tells our application that, when called using this method, we don't need to display the header information. All we want is the post.
We then call the get() function. Once done, all of the HTML, CSS, etc. will be in the "data" parameter.
Because we want our dialog to have a title of the actual subject of the article we use the jQuery find() function to retrieve the subject which in this case is always wrapped in the <h3> container.
We then set the title of the dialog to the subject, and load the dialog object itself with all of the data returned from the get() function.
Finally, we set our Ajax global flag back to true. In this example we're doing it using the always() callback function of the get() function. This is so that no matter what happens with the call to the get() function this flag will always get set back.
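The guarantee that always() gives us is the same one a try/finally block gives you in plain JavaScript. As a quick sketch (the flag and helper names here are made up for illustration, not part of the site's code), the flag gets restored whether the wrapped work succeeds or throws:

```javascript
// Simulated version of jQuery's global Ajax flag.
let globalAjax = true;

// Run a function with the flag turned off, restoring it no matter what,
// just like the .always() callback restores it after the get() call.
function withGlobalAjaxDisabled(fn) {
  globalAjax = false;
  try {
    return fn();
  } finally {
    globalAjax = true;  // runs on success or error, like .always()
  }
}
```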
The end result is pretty neat! We now get a pop up window that loads the post much quicker than displaying it as a separate page. This is mainly because all of the CSS, JavaScript and other resources don't need to be reloaded.
Also, for crawlers and bots indexing the site the actual links still exist and work as they did before.
We continue to find more and more neat things you can do with jQuery and the hundreds of addons created for it. It keeps things interesting and fun!
We have made a few updates to this method now. We decided that it isn't always a good idea to show the "pop up" version every time a message is clicked. So, in these cases (so far) the full message will be shown instead of the pop up version.
We will continue to look into other times this should be done and would love to hear when you feel the pop up version doesn't quite work as well.
How did we do this? It was actually quite easy.
We first added an attribute to each of the hyperlinks we wanted to bypass the pop up view method as such:
<a href="/forum/display?threadid=/%threadid%/" data-id="showfull">
As you can see we added a data-id attribute of showfull. (This really could have been anything, but we chose this value).
Next, in our Javascript we added an if statement around the pop up method as such:
function displayPostWindow() {
$('body').on('click', 'a[href^="/forum/display"]', function(event) {
if ($(this).attr('data-id') != 'showfull') {
event.preventDefault();
...
}
})
}
Have we mentioned how much we love jQuery? :)
This site uses recursion, mainly, to display message threads.
Think of the message thread structure as the roots of a tree. Each thread can have unlimited replies, and each reply can have an unlimited number of replies. To parse them, we need to start at the top, work our way down each root to the end, then back up to each branch and down again, until we are done.
Hopefully this image will help.
In this example a is our starting message in the thread. Messages b, c, and d are direct replies to message a. Messages e and f are direct replies to message b and so on and so forth. So we need to be able to traverse this "tree" in a method where we don't rely on a set number of "children" each post will have.
The use of recursion is a perfect fit for this type of processing. A simplified version of the code used for this is shown below:
*//////////////////////////////////////////////////////////////*
* #gbf_displayThreadMessages *
*//////////////////////////////////////////////////////////////*
P #gbf_displayThreadMessages...
P B EXPORT
*--------------------------------------------------------------*
D #gbf_displayThreadMessages...
D PI
D in_ThreadID 64 Value
*
D l_level S 10i 0 STATIC
D l_lastThreadID S LIKE(THREADID)
D l_lastReplyID S LIKE(REPLYID)
*--------------------------------------------------------------*
/free
l_level += 1;
if (l_level = 1);
OPEN THREAD2;
endif;
exec SQL
select
PATH into :PATH from THREADPF
where
ACTIVE = 'Y' and THREADID = :In_ThreadID;
#writeTemplate(PATH);
// display any replies
SETLL in_ThreadID THREAD2;
READE in_ThreadID THREAD2;
dow (not %eof(THREAD2));
l_lastThreadID = THREADID;
l_lastReplyID = REPLYID;
#gbf_displayThreadMessages(THREADID);
SETGT ThreadKey THREAD2;
READE in_ThreadID THREAD2;
enddo;
if (l_level = 1);
CLOSE THREAD2;
endif;
l_level -=1 ;
/end-free
C ThreadKey KLIST
C KFLD l_lastReplyID
C KFLD l_lastThreadID
*--------------------------------------------------------------*
P #gbf_displayThreadMessages...
P E
The main processing goes something like this:
1. Increment the level counter. If this is the first (base) call, open the THREAD2 file.
2. Retrieve the message for the thread ID passed in and write its template to the page.
3. Read each reply to the message. For each reply, save its key values, then recursively call #gbf_displayThreadMessages() with the reply's ID. When control returns, use the saved keys to re-position the file pointer and read the next reply.
4. If this is the base call, close the file. Decrement the level counter.
This process then repeats for each reply to each message, and each reply to those replies, which can continue as deep as the thread itself goes. Again, think of the roots of a tree to visualize what an entire thread may look like.
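To see the depth-first pattern without the RPG file I/O, here is a small JavaScript sketch (the data structure and function name are hypothetical). Because each call loops over its own in-memory list, there is no shared file pointer to save and re-position the way the RPG version must:

```javascript
// Recursively display a thread: visit a message, then each of its replies
// depth-first. "replies" maps a message ID to the IDs of its direct replies.
function displayThread(replies, id, level = 1, out = []) {
  // Indent by depth, the same role l_level plays in the RPG code.
  out.push('  '.repeat(level - 1) + id);
  for (const child of replies[id] || []) {
    displayThread(replies, child, level + 1, out);
  }
  return out;
}

// The tree from the article: b, c, and d reply to a; e and f reply to b.
const replies = { a: ['b', 'c', 'd'], b: ['e', 'f'] };
// displayThread(replies, 'a') visits a, b, e, f, c, d in depth-first order.
```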
STATIC Variables
The first point to make is the use of the STATIC keyword when we are defining the l_level variable. This keyword means that the value of this variable is preserved across each recursive call level of the subprocedure. This way we know how "deep" into the thread (and its replies) we are.
We need to know what "level" we are at so we can open and close our file at the right time. It's also used for other things (such as indenting replies so they appear in a "thread" format).
Because of this, the first thing we do is increment our level variable, and the last thing we do is decrement it.
Variables that are not declared as STATIC will be reset for each recursive call level we are at.
Traditional I/O vs SQL
The second point is that you see we're using traditional I/O (READxx, SETxx, etc) in part of this instead of SQL. That's because cursors used for SQL (at least on the OS version we are on) are also static. So if we open a cursor then recursively call the subprocedure again, it will try to open the same cursor, which is already open, and throw an error.
File pointers are also "static". This is why in our code we need to re-position the pointer when control is returned from the recursive call. You'll see that to accomplish this we store the last key values read from the file before making the recursive call. When control is returned we use the SETGT operation to re-position the file pointer to where we left off.
Recursion is one of the most powerful tools that I've ever run across, especially for its "simplicity". It's how programs are written to play chess, parse a bill of materials file, and more. Simple, yet powerful, and when you find the perfect use for it, you'll know it.
We recently added the option to include a picture for your profile. You can find this in the User Control Panel (User CP) by clicking on your user ID when signed in.
A lot of sites allow you to upload your own image to the site. But, with the cloud and so many hosting sites, we chose to allow you to specify the path to a file somewhere on the net. In other words, why should we store it when it is most likely already stored on a cloud drive somewhere, such as One Drive, Google Drive, or Photo Bucket?
To make sure the proper path is entered, we added functionality that will show the picture below the value.
See the example below:
If we change the path, the picture also changes:
This is all done with the magic of jQuery. And, it's a lot easier than you may think.
Below is a snippet of the HTML source code for the web page:
<tr>
<td valign="top">
Profile Picture:<br>
<span class="small3">(will be reduced to 50x50)</span>
</td>
<td>
<input size="100" id="newProfilePicture" name="profpic" value="/%profpic%/">
<br>
<img id="profilePicture" height="50" width="50" src="/%profimg%/">
</td>
</tr>
You'll see that we have ID attributes for the input field that holds the path to the image file, as well as for the picture (<IMG> tag) itself.
In days past, we would have attached a JavaScript handler to the input so that when the cursor leaves the field, it executes some JavaScript to change the source of the image file to the new value. But with jQuery, it's even easier.
$('#newProfilePicture').blur(function() {
var newSrc = $('#newProfilePicture').val();
$('#profilePicture').attr('src',newSrc);
});
The blur() method is used on the input field, and when it fires (because the cursor left the field) we use the attr() method on the image to change the SRC attribute.
This is where IDs come in very handy when naming objects. Classes can also be used, but in this case we are performing a very specific operation, so we want to use a unique ID on each of the HTML objects. (And IDs on web pages NEED to be unique, or jQuery just won't work right... don't ask me how I know that!)
It's really just that simple!