BPC SurveyManager - Introduction

From RiskWiki


BPC SurveyManager - Purpose, Origin and Capability

Introduction - The Purpose

BPC Survey Manager is an exceptionally powerful survey engine. It was originally conceived to support control self assessment, but in the intervening years has grown to service a bewildering range of web data collection scenarios. The best way to think of it is that the survey system is a specialized web page design and delivery engine that happens to be oriented to surveys, and therefore stores everything a user enters as if it were a unique survey response.


The current version of the BPC Survey Manager is used for purposes ranging from state-wide tertiary student surveys, performance measurement, compliance tracking, risk data collection, controls analysis, audit data collection, 360 degree staff reviews, automated legal assessment, web site construction, and, oh - surveys. Its original purpose as a control self assessment tool is fundamental to its success as a general purpose survey tool.


Although the system comes with an array of management clients, once a survey has been built you can actually administer it using just an email client like Outlook - which was how the very first version worked!


The Origin

Control self assessment is an audit concept from the 1980s that was designed to reduce the costs of audit and control by establishing a framework of control compliance checklists and performance records that are then completed by the operators of the various business processes and control systems in an organisation. The control self assessment forms effectively become compliance attestation statements that are completed in line with the relevant control cycle. So, if a supervisor had to do certain things on a weekly basis, he or she would complete a control form that attested to whether those things had been done and possibly the performance statistics associated with having done them. This would generate a form each week. The result for management was a continuous, up-to-date picture of systems performance, control and compliance.


The efficiency gain appeared when the audit process began. In a control self assessment control model, audit does not audit the underlying transactions; rather, it audits the veracity of the control self assessment system. It does this by testing the honesty of the person completing the control self assessment forms. Instead of sampling the entire transaction base of, say, a transaction system over a period - resulting in potentially large sample sets - audit block-samples the control self assessment forms and tests the transactions that relate to the forms. Essentially a stop-go sampling method can be employed which delivers a pass or fail as to the reliability of the self assessment. Only if the control self assessment forms are found to be in error is a full statistical sample of the underlying transactions required. Our measurements of the efficiency gain were of the order of a 30% reduction in audit cost.


The weakness in control self assessment (CSA) when it was conceived in the 1980s was that the cost of managing the mountains of paper control forms generated far outweighed the cost advantage achieved from more efficient audit methods. Consequently CSA enjoyed minimal adoption in process design, with a few notable exceptions in Total Quality Control organisations. The advent of internet and intranet technologies provided a potential solution to that problem, but only if the cost of preparing the control forms could be minimised and the automation employed proved able to handle extreme data loads at low cost.


BPC Survey Manager Capability

A control self assessment form is essentially a survey, but in order for a survey system such as BPC SurveyManager to handle it, the survey engine must satisfy a few essential criteria:

  1. It must be able to publish a survey without requiring a web programmer.
  2. Surveys must be very fast to construct and deploy.
  3. The surveys must be able to be published to specific people - not simply anonymously.
  4. Survey responders should be able to receive invitations to take a survey via a simple email message with a clickable link.
  5. The range of information collected must include text, numbers, dates, selection lists, weights, documents, etc.
  6. The system must be able to collect data from both people and devices.
  7. The data collected must be able to be analysed down to each responder - not just in the form of a poll or vote.
  8. Content should be able to be re-used in other surveys.
  9. Questions should be uniquely identified across all surveys - not just within the survey - so that cross survey data analysis can be performed.
  10. The content must be able to interface with the content of other surveys and previous results, and change dynamically depending on the answers given.
  11. Real-world organisational structures should be supported, including matrix structures as well as divisional trees.
  12. The survey must be able to be altered after commencement without invalidating the previously collected results.
  13. The survey must be able to be delivered to every kind of input device - from a PDA to a laptop.
  14. The survey should behave like paper - you should be able to complete it over days without having to re-key everything you already entered.
  15. The survey engine and data must be future proofed so that it continues to work as the underlying technology evolves.
  16. The data collection engine must be able to scale spectacularly.


The issue of scaling is particularly critical. In a CSA environment with 70,000 or more employees completing a weekly CSA survey, they might all decide to complete it at 4:50 PM - ten minutes before leaving work for the weekend. The survey engine must be able to handle such a large load without falling over, or staff will simply not bother and the system will break down.


BPC Survey Manager handles all these requirements, and more.


BPC Survey Manager - System Components

Introduction to the Components

The versatility of the survey management system means that it is potentially an extremely complex application in terms of how it works and what it can do. To deal with this, one of our ongoing tasks is simplifying it for the user, so you don't need a degree in it to use it. 95% of the time a few simple capabilities are all that is required, but the BPC Survey Manager system is designed to handle a large array of very obscure scenarios. So there are often multiple ways of accomplishing the one task.


Essentially BPC SurveyManager comes in two broad components:

  1. The BPC Survey Engine – This is the library that actually delivers the surveys to a responder's screen, along with a range of reports and some administration functions, and
  2. The BPC SurveyManager Client – This is the component that designs and manages the surveys that are delivered by the Survey Engine. It is primarily an administrator's tool.


The BPC Survey Engine

The Survey Engine is pretty straightforward. There is one of these and it does everything (at least as far as delivering surveys and getting responses goes). It is a stateless ISAPI dynamic link library (DLL) that essentially operates like a web service, without the self publication component. It is designed to work on IIS servers version 5 and above, and can run in either a secured or anonymous user access configuration. It is essentially insensitive to the version of IIS.


It does not care what version of Windows you are running and will work on Windows 98, Windows 2000, XP, 2003, Vista, Windows 7 and Windows 2008. It has an extremely low memory-resident data load and is only a few megabytes in size itself. It can be installed by simply copying it onto a web server, although there are some registry entries that would need to be added.


It connects to an SQL Server database via ADO (built into all MS operating systems from Windows 2000 onwards) - MS SQL 2000, MSDE 2000, MS SQL Server 2005, MS Express 2005, MS SQL Server 2008, MS Express 2008. It can coexist with many instances of itself on the same IIS server and does not care what it is called. The survey engine can work with as many separate survey databases as you like, and multiple engines can even talk to each other.


A browser that connects to the survey engine never knows the name of the database to which the survey engine is connecting.


It can handle thousands of simultaneous users, can be deployed in a web farm, and can be stopped and started while operating with almost no impact. It should operate as an IIS worker process, and likes having the worker process cycle time set - so it runs virtually unattended for years. Further, you can run database backups without stopping the survey engine itself.


It has a built-in debug mode so you can get a dump of everything it is doing to build a page, on a per-survey basis, when you are designing complex surveys.


The BPC Survey Engine delivers HTML 4 code (but essentially uses mainly the HTML 3.2 subset), and optionally supports both DHTML/CSS extensions and JavaScript. This means that the surveys are essentially insensitive to the browser accessing them. Further, it can optionally utilise templates and plugin libraries that use the Survey Engine plugin extension API.


The BPC Survey Manager Client

While the Survey Engine is really the heart of BPC Survey Manager, most people think of one of the clients as the "Survey Manager" system. This is understandable as this is the way most survey administrators see the system. Of course, survey responders never see the SurveyManager client as they just access the system via a browser, usually from a link sent in an emailed invitation.


The versatility of the survey engine can rapidly create very complex management clients if all capabilities are surfaced at once. This has led to an array of BPC SurveyManager clients that are delivered through a variety of mechanisms (Browser, Desktop Executable, and RiskManager) and serve specific requirements. There are a number of these; they work in different ways and deliver different combinations of the core engine's capabilities – to give users some semblance of simplicity.


These are the options:


  • BPC RiskManager – RM has a simplified SM client that thinks the world consists of only the “default” organization. It ignores filters, but surfaces properties and allows the creation of moderately complex surveys, and most importantly knows that there are RM tables in the database and can update those tables as well as the SM tables. It is an application server client, so it talks to an intermediary layer which then talks to the database. It also has the advantage that it can make use of the RM script engine for working with the results, assign and track actions arising from responses, note exceptions, and more.
  • BPC SurveyManager DeskTop – This can do everything, and it is complex as a result. Some of its screens are a bit brutal. The desktop client is the only way to make use of the distributed database capabilities of the survey engine. Notionally the SM Desktop can be set up to talk to a local copy of the survey database on which you design a survey; you then tell it to distribute the survey to a particular range of survey engine databases, and later to get the results down from those databases. On the downside, it knows nothing about the fact that there are RM tables in the database, so it ignores them.
  • BPC SurveyManager WebClient – The web client is designed for pure browser-based management of surveys. It is intended for large scale survey databases with large numbers of organizations or organization units, supports bulk actions well (like importing large numbers of responders/users), and allows separate administration users for each organization and region (group of organizations) as well as whole-of-database “super users”. It has a simple survey creation model and very good multi-org deployment capabilities. It knows about templates, and can prevent administrators from changing questions in a survey that is being centrally deployed (whereas the Desktop assumes the only users using it have god-like status). On the downside, it knows nothing about the existence of the RM tables and therefore ignores things like exceptions and action tracking. It also restricts the appearance to a fairly standard look, whereas both the RM and Desktop clients allow you to completely manipulate the look of a survey. The web client also has a very good manual, which is pitched at beginner level. We use the web client to handle the Victorian (Australia) tertiary education student survey, which spans 400+ organizations and thousands of students annually, with the 400 orgs each having their own administration area in the one database.


All these clients can be used simultaneously on the one survey database and the one survey. So a survey might be designed in one client and managed in another. They are not mutually exclusive.


The survey engine in the BPC RiskManager database is the full survey engine, and to make it work with BPC Risk Manager, the full survey engine database is merged into the risk manager database. At this time, the survey manager client in the risk manager system assumes a single organisation for all surveys, relying on the risk manager application to distribute results where needed to risks and then – indirectly – across the risk manager's view of the organisation structure held in the risk manager part of the database. The BPC SurveyManager desktop and web clients do not know the BPC RiskManager system exists; they assume they are responsible for everything and therefore have a bit more power to them with respect to organization control.



BPC Survey Manager - Survey Components

The survey engine never actually stores a displayed page per se; instead it dynamically builds every page line by line as required. Because of its original purpose, we call every line in the page a question, but this is a little misleading as a line may actually be a picture, or a heading, or a hidden screen region, etc. In its simplest form a survey has:


Survey Header

This contains the administration control and general layout information for a survey. It acts as the hub for all the other components of a survey. The layout section allows you to provide any html layout you desire and wraps all the other parts of the survey. Everything that comes from the survey engine is referenced in this layout by special markup tags - including the entire survey body itself.

Survey Reminders

A survey can have any number of reminders that are plain text or HTML rich emailable messages with their own markup tags allowing the embedding of large amounts of custom information, including responses and reports from this or other surveys.

Survey Pages

The survey pages are dynamically generated as required in groups of questions we call "question groups". In most situations one or more question groups match a conventional survey page, but this is not required.

Survey Page Header

You can define custom headers that appear on survey pages, or, as is more common you just let the survey layout definition deliver the header and footer.

Survey Page Footer

You can define custom footers that appear on survey pages, or, as is more common you just let the survey layout definition deliver the header and footer.

Survey Questions

Survey questions represent a pool of questions that are dynamically added to each page as required. Survey questions are actually stored centrally, so the same question can be re-used in each survey and its unique identifier will be the same in each survey in which it is used. This allows responses to be analysed across different surveys. Every survey question has an optional weight or performance rating that is stored as a floating point number with the response. This can provide a valuable insight into the reason for certain responses. The field can also be used for any other purpose where a weight is desired.

A question has both a layout and a content portion, allowing complex layouts in the question itself. For example, one of the example surveys shows a survey with an entire second survey embedded in one of the questions. You can even put the same survey into a question in that survey - effectively creating a recursive survey - like the effect of having two mirrors reflecting each other.

Question layouts can range from conventional question-response structures through to report style layouts where the response is embedded as part of a body of text. This is ideal for generating management reports and other templated structures directly from the survey engine.
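
Because questions are stored centrally and keep the same unique identifier wherever they are re-used, responses can be lined up across surveys by question id. The following minimal Python sketch illustrates the idea only - the field names are hypothetical and not the actual SurveyManager schema.

 # Illustrative sketch only: the real data lives in SQL Server tables, and the
 # field names below are hypothetical, not the actual SurveyManager schema.
 from collections import defaultdict
 
 # Each response record carries the centrally stored question id, so the same
 # question re-used in two surveys shares one id.
 responses = [
     {"survey": "Q1 Compliance", "question_id": 1041, "answer": "Yes", "weight": 0.8},
     {"survey": "Q2 Compliance", "question_id": 1041, "answer": "No",  "weight": 0.8},
     {"survey": "Q2 Compliance", "question_id": 2007, "answer": "N/A", "weight": 0.2},
 ]
 
 # Cross-survey analysis: group answers by the shared question identifier.
 by_question = defaultdict(list)
 for r in responses:
     by_question[r["question_id"]].append((r["survey"], r["answer"]))
 
 for qid, answers in by_question.items():
     print(qid, answers)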

Distribution list

The distribution list is the list of people responding to the survey. We refer to the act of connecting a survey to its distribution list as "publishing" the survey.

Event messages

Event messages are the messages communicated to the user when particular events occur - such as the survey ending, or survey access not being allowed, etc. These can all be customised if desired.

Rules

Because the questions on each page are dynamically generated, there is a rules engine in the system. Every question can have its own set of rules, which may connect to an external plugin to which it might pass the response, or from which it might collect some additional information (or even a replacement response). More commonly the rules will determine which questions to display to the user on the next page. The rules might cause the survey to loop back on itself or reject the answer and request a different response.

Survey Responses

Ultimately the survey is about getting and recording responses. The responses are recorded uniquely by organisation, survey, instance, person, question, and realm. Where the option is turned on, the importance rating of each question is stored with the person's response.


BPC SurveyManager - The Database Structure

A waterfall diagram of the Survey Manager database

The grossly oversimplified structure of the survey engine database is shown below (object names repeat because they are indexed in multiple ways, and therefore the same object can be accessed with multiple sets of indexes):


Database has:

Server Configurations
Instances
People(have)
.Access Rights
.Instances(have)
..Properties
..Filters
..Surveys(have)
...Instances(have)
....Responses
Data Folders
Archive
Publishing Servers
Reports
Organisation regions(have).
.Organisation units
.Responses[View]
Global Organisation
Organisation units(have)
.Data Folders
.Responses[View]
.Properties
.Questions
.People
.Surveys(have)
..Properties
..Responses[View]
..Instances (like January, February, etc)(have)
...Responses[View]
...Question Groups (like pages)(have)
....Properties
....Questions(have)
.....Responses
.....Properties
.....Filters
.....Rules
.....Select Ops
.....Numeric Ops
.....Date Ops

Some notes on the above tree diagram:

  • This tree layout is necessarily simplified to provide a feel for the overall database structure using primary relationships. In the actual implementation it is considerably more complex than this and there are many more cross relationships than shown.
  • Object names are repeated to show indexing relationships.
  • Some objects have been omitted for clarity.
  • The object names do not correspond to the underlying table names - rather the object names correspond to the 'purpose' of the object.
  • There is one response table, but it has multiple views built on top of it - hence the use of the [View] marker to indicate this.
  • Question groups are not so much a grouping mechanism as an indexing relationship. Questions can be directly indexed off the surveys, rather than requiring the question group to be found as the diagram relationship implies.
  • Relationships are complicated by an implied global id for most objects called "default". The global organisation is really an organisation unit called "default", but things placed in the global organisation are accessible by all organisations regardless of any regional organisation grouping. Variations of this concept exist for other objects, such as the instances. A property defined with the default instance is visible in all instances, etc.
  • Realms are not illustrated. A realm is not a table so much as an index level that exists only in the response table to allow forking of response sets (see below).


Almost all of these structural parts have separate property tables (as shown) which are discussed later. The property tables provide a key additional dimension to the self modifying nature of the survey engine. In the structure above, some tables repeat. That is because logically they appear at both parent and child levels.


Organisation regions are actually a tree with the organizations as leaves (like a file directory tree), and people exist at the database level and are attached to organizations, surveys and survey instances. In each case there is a built-in object called "default": if you don't want any organisations, use the "default" organisation; if you only want one instance, again use the "default" instance. If you want a survey that is automatically selected if the user does not specify a survey, build a survey called "default".


Storing Responses

When a response is stored it is unique to, and indexed by, organization-survey-survey instance-person-question and realm. Realms are not illustrated in the above tree diagram and are rarely used. A number of views are present which join the various tables back to the response table so that the response view can present a single point of report access (no need to join tables to see everything about a response, including the question text). Views of the response table are provided which group the responses by the various indexes and provide user and response counts appropriate to the grouping.


A Realm is a special kind of beast that allows the same instance of a survey to be forked, for example in response to a question. The default realm is '\'. In the absence of realms being used (the normal situation), all responses are stored with the assumed realm '\'. A typical example of the use of a realm is a 360-degree survey where a review of staff is completed many times for the same instance by the same person, but about different people. In this case the realm can be used to separate the responses about each person while preserving the same instance. (Note: There are other ways to do this using instances and/or properties.)
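
As a purely conceptual illustration of the uniqueness key and of realm forking (the structures and names below are illustrative, not the real tables):

 # Conceptual sketch (not the real schema): a response is keyed by
 # organisation, survey, instance, person, question and realm.
 responses = {}
 
 def store_response(org, survey, instance, person, question, answer, realm="\\"):
     # The default realm is '\'; realms fork the response set within one instance.
     responses[(org, survey, instance, person, question, realm)] = answer
 
 # 360-degree example: the same reviewer answers the same question in the same
 # instance several times, once per person being reviewed, separated by realm.
 store_response("default", "360 Review", "2012-Q1", "alice", 501, "Exceeds", realm="bob")
 store_response("default", "360 Review", "2012-Q1", "alice", 501, "Meets",   realm="carol")
 
 # 2 distinct responses despite identical survey/instance/person/question.
 print(len(responses))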


Input and Responses

Data Types

Almost anything can be a valid input (response) to a question in a survey. Where there is not a built-in facility for the desired input, customisable extensions are supported.


Input types include:

  • Numeric (integer and floating point) range restricted responses
  • Date range restricted responses
  • Date in multiple formats - text, numeric, pick box, drop list, etc.
  • Selectable lists (in many formats: radio buttons, drop lists, lists, link sets, buttons, etc.)
  • Text - single line, multi line text, multi-line WYSIWYG editor
  • File upload (any data type, any size up to 4GB)


Range Checks, Edit Checks and Exceptions

A number of methods for range and edit checks are supported. Range checks can be set to be validated on the client, or on the server. On the server, out-of-range value handling can be customised through rules.


For compliance purposes, there are optional exception values held in each question so that exception reports can be easily generated when a response falls into the exception range. The rules engine can be used to define specialised exception handling.
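
A minimal sketch of the idea, assuming a numeric question with a permitted range and an optional exception range (the function and parameter names are invented for illustration):

 # Hypothetical sketch of server-side range and exception checking; the actual
 # engine drives this from per-question settings and the rules engine.
 def check_response(value, min_ok, max_ok, exc_low=None, exc_high=None):
     """Return (accepted, exception_flag) for a numeric response."""
     if not (min_ok <= value <= max_ok):
         return False, False          # out of range: reject (or hand to a rule)
     in_exception = (
         exc_low is not None and exc_high is not None
         and exc_low <= value <= exc_high
     )
     return True, in_exception        # accepted; flag it for exception reporting
 
 print(check_response(97, 0, 100, exc_low=90, exc_high=100))  # (True, True)
 print(check_response(150, 0, 100))                           # (False, False)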


The Role of Instances

A survey must always have an instance in order to be available to a user (responder). Responses are recorded against an instance of the survey. A survey can have one or more instances attached to it.

An instance can be identified by any string you like and can mean anything you like, but typically instances are things like January, February, Monday, Tuesday, Week01, Week02. Instances are grouped into instance groups such as adhoc, months, years, quarters, days or weeks, etc. Thus one survey can be created and then attached to many user defined instances.

A user completes an instance of a survey. If multiple instances of a survey have been published to the user, the instances are made available in the order in which they are included in their instance group. Completing a survey instance does not automatically kick the user onto the next instance unless the auto-lock property is set to True. A user will move onto the next instance when the survey manager locks the preceding instance, unless one of the auto-locking options is chosen.


Instances allow different questions to be asked depending on the survey instance. Each question can be attached to specific instances or an instance group, so you can have a question that only appears in January, or only for instances that are quarters, etc.

Where you just want a single instance of a survey, and you don't want any specific instance control, just publish the survey to the "default" instance.
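
The following sketch illustrates how instance groups and question-to-instance attachment interact, including the "default" fall-back; the data and function names are illustrative only:

 # Sketch only: instance groups and question-to-instance attachment, with the
 # "default" instance standing in for "all instances". Names are illustrative.
 instance_groups = {
     "months":   ["January", "February", "March"],
     "quarters": ["Q1", "Q2", "Q3", "Q4"],
 }
 
 # A question may be attached to a specific instance, to an instance group,
 # or to "default" (appears in every instance).
 question_scope = {
     "q_opening_stock":  {"instance": "January"},   # only in the January instance
     "q_quarter_review": {"group": "quarters"},     # only in quarterly instances
     "q_sign_off":       {"instance": "default"},   # in every instance
 }
 
 def visible_in(question, instance):
     scope = question_scope[question]
     if scope.get("instance") in ("default", instance):
         return True
     group = scope.get("group")
     return group is not None and instance in instance_groups[group]
 
 print(visible_in("q_quarter_review", "Q2"))   # True
 print(visible_in("q_opening_stock", "Q2"))    # False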


People, Survey Deployment and Survey Publishing

People can only access the system via the orgs to which they are attached. So a person must both exist in the database and be attached to an organization before they can do anything in that organisation.


Before a person can respond to a survey, the survey must be published to them AND one or more instances of the survey must also be published to them. The concept of attaching a person to a survey is called publishing, and the concept of attaching an instance of a survey to a person is called publishing the instance. An instance is a user defined, arbitrary identifier that allows the same survey to be responded to by a person uniquely on one or more occasions. Instances might be the names of months or week numbers, or years, or anything else you wish. You can have as many instances of a survey as you like.


In databases with more than one organisation, an additional option arises. When a survey is created it is created in an organization; it can then be deployed to any other organizations in the database so every organisation has the same survey. The act of duplicating the same survey to multiple organisations is called deployment.


It can be deployed with or without the instances attached. Let's assume it is deployed with the instances. After deployment the survey must be published to a list of users attached to each organisation, AND each user is granted access to all or some of the instances. If the same user is attached to multiple organisations, they can then get multiple instances of the same survey to complete, but the responses are unique to each organisation and held with respect to each organisation.


When a survey is deployed, the survey questions do not have to be deployed with the survey. If they are not, the original organisation's questions are used (and the local organisation administrator cannot alter them) while the header can be customised in each organisation. This way each organisation can distribute the same survey to its users, but with customised layout and livery. Reports can then deliver database, region and organisation specific views of the survey.
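
A conceptual sketch of the deploy / publish / publish-instance chain described above (the function names are illustrative, not the SurveyManager API):

 # Conceptual sketch of the deploy / publish / publish-instance chain.
 # Function and field names are illustrative, not the SurveyManager API.
 deployed = set()        # (org, survey)
 published = set()       # (org, survey, person)
 instances = set()       # (org, survey, person, instance)
 
 def deploy(survey, orgs):
     for org in orgs:
         deployed.add((org, survey))
 
 def publish(org, survey, person, survey_instances):
     published.add((org, survey, person))
     for inst in survey_instances:
         instances.add((org, survey, person, inst))
 
 def can_respond(org, survey, person, instance):
     return ((org, survey) in deployed
             and (org, survey, person) in published
             and (org, survey, person, instance) in instances)
 
 deploy("Weekly CSA", ["Finance", "Operations"])
 publish("Finance", "Weekly CSA", "alice", ["Week01", "Week02"])
 print(can_respond("Finance", "Weekly CSA", "alice", "Week01"))     # True
 print(can_respond("Operations", "Weekly CSA", "alice", "Week01"))  # False - not published there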


Properties and Filters

Almost every object (listed above) has “properties”. A property is a user (survey designer) defined storage location that can store any value less than 2000 characters long. Some properties are built-in (i.e. have reserved names with special meaning) but, as there is no limit on the number of properties per object, there is plenty of scope for user defined properties. Properties can be displayed by inserting a tag that matches the property name in surveys and questions.


The property tables form a cascading tree that sits alongside the users, questions, question groups, surveys and organizations. Each property has a user defined name with an instance and a value (which can also be changed by certain questions). So for the same property name a user may have a different value in different instances (e.g. in January versus February), and the property may have a value in a survey that is overwritten by a value in a specific question, etc. An example of such a property is the “Show Last Answer” property, which shows the response the user entered to a question the last time they answered it; this might be false for the survey as a whole, but true for a specific question. When accessed in a question text, the property for the question will take precedence over the same property defined at the survey level.
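
A minimal sketch of that precedence, assuming a simple question-over-survey lookup with the "default" instance as a fall-back (names and structures are illustrative only):

 # Sketch of the property precedence described above: a question-level value
 # overrides a survey-level value, and the "default" instance acts as a
 # fall-back for instance-specific values. Names are illustrative only.
 properties = {
     # (level, owner, name, instance) -> value
     ("survey",   "Weekly CSA", "Show Last Answer", "default"): "false",
     ("question", "q_17",       "Show Last Answer", "default"): "true",
     ("question", "q_17",       "Show Last Answer", "January"): "false",
 }
 
 def resolve(name, instance, question, survey):
     # Most specific first: question+instance, question+default, survey+instance, survey+default.
     for key in (("question", question, name, instance),
                 ("question", question, name, "default"),
                 ("survey",   survey,   name, instance),
                 ("survey",   survey,   name, "default")):
         if key in properties:
             return properties[key]
     return None
 
 print(resolve("Show Last Answer", "January",  "q_17", "Weekly CSA"))  # false (question + January)
 print(resolve("Show Last Answer", "February", "q_17", "Weekly CSA"))  # true  (question + default)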


In addition to properties, surveys, users and questions can have “Filters”. The filter tables are like a lock and key. A survey or question with a filter will only display to users that have the corresponding filter in their filter list – so the same instance of a survey can be delivered to both a manager user and a general staff user and they might see different questions. Filters can also be instance specific.

Properties and filters can be applied to all instances of a survey by setting the instance value of the property to "default" which means the property or filter applies to all instances.
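
The lock-and-key behaviour of filters can be sketched as a simple set intersection (purely illustrative data and names):

 # Lock-and-key sketch of filters: a filtered survey or question is shown only
 # to users holding a matching filter. Purely illustrative data.
 user_filters = {
     "alice": {"manager"},
     "bob":   {"staff"},
 }
 
 question_filters = {
     "q_budget_signoff": {"manager"},   # managers only
     "q_timesheet":      set(),         # no filter: visible to everyone
 }
 
 def question_visible(user, question):
     required = question_filters[question]
     return not required or bool(required & user_filters.get(user, set()))
 
 print(question_visible("alice", "q_budget_signoff"))  # True
 print(question_visible("bob",   "q_budget_signoff"))  # False
 print(question_visible("bob",   "q_timesheet"))       # True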


Manual Publishing Versus Auto Publishing

Generally the concept is that unless you define a list of responders and publish your survey to that list, nobody will be able to access and respond to any instance of the survey. There are, however, situations where you do not know who will respond. Sometimes the database does not even know them yet; at other times they are known and members of your organisation, but you do not know ahead of time that they will be responding to the survey. In these cases you need a way for the survey to automatically publish itself to them when they try to access it, so they can answer it.


A survey, once created, can be set to AutoPublish itself (by setting the organization's or the survey's AutoPublish property to true). In this case the survey does not need to be pre-published to a user, but will be automatically published to the user when they first attempt to respond to it. Similarly there are “autos” for other things like creating users, creating anonymous users, creating automated instances of a survey for pre-existing users, etc. – and all sorts of combinations of these things.
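
As a rough sketch of the auto-publish decision described above (property names and the exact precedence between survey and organisation settings are assumptions here):

 # Illustrative sketch of the auto-publish decision: if a survey has not been
 # published to a user but AutoPublish is true on the survey (or organisation),
 # publish it on first access. Property and function names are assumptions.
 published = set()   # (org, survey, person)
 
 def access_survey(org, survey, person, autopublish_props):
     if (org, survey, person) in published:
         return True
     if autopublish_props.get((org, survey)) or autopublish_props.get(org):
         published.add((org, survey, person))   # publish on first access
         return True
     return False                               # no pre-publication, no auto-publish
 
 props = {("default", "Site Feedback"): True}
 print(access_survey("default", "Site Feedback", "visitor42", props))  # True (auto-published)
 print(access_survey("default", "Weekly CSA",    "visitor42", props))  # False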


Question Display Selection

Introduction

Questions displayed in a survey are selected through a number of mechanisms, some of which we have already discussed. All mechanisms operate concurrently:

  • The question must belong to the current survey
  • The user must have the relevant survey instance available to them
  • The question must belong to the current instance group (or the "default" instance)
  • The question must belong to one of the current question groups being displayed on the current page (think: page number)
  • The survey filter (if defined) must match one of the user's filters.
  • The question filter (if defined) must match one of the user's filters.
  • The organisation, survey, question group and question properties must not include a property that hides the question (e.g. "Invisible")
  • The Rules Script (if defined) must have selected the question for display. (See "The Rules Engine" below).


There are a few other ways that question content may be hidden - such as using content from a property in the question text that is not available to the current survey instance.
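
Pulling the criteria in the list above together, a compressed and purely illustrative sketch of the concurrent checks might look like this (the survey-level filter check, which works the same way as the question-level one, is omitted for brevity):

 # A compact sketch combining the selection criteria listed above. All of the
 # checks run concurrently; names and structures are illustrative only.
 def question_displayed(q, user, ctx):
     return (q["survey"] == ctx["survey"]
             and ctx["instance"] in user["instances"]
             and q["instance"] in ("default", ctx["instance"], ctx["instance_group"])
             and q["group"] in ctx["page_groups"]
             and (not q["filter"] or q["filter"] in user["filters"])
             and not q["properties"].get("Invisible")
             and q["id"] in ctx["rules_selection"])   # rules engine has the final say
 
 q = {"survey": "Weekly CSA", "instance": "default", "group": "page2",
      "filter": None, "properties": {}, "id": 17}
 user = {"instances": {"Week01"}, "filters": {"staff"}}
 ctx = {"survey": "Weekly CSA", "instance": "Week01", "instance_group": "weeks",
        "page_groups": {"page2"}, "rules_selection": {17, 18}}
 print(question_displayed(q, user, ctx))  # True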


The Rules Engine

We won’t go into the rules engine part of the survey manager yet, except to note that it can analyse the responses received per question for the current or any other question, in this or any other survey, in this or any other organisation; that it has a natural language/pattern matching parser in it; and that the rules engine can interact with plug-in libraries at the backend to send and receive responses to other systems. Any number of rules can be defined on a per-question basis, and the effect of executing the rules can be to modify the response, update another system and decide the questions to be displayed on the next "page" a user sees. The rules work on the responses received in the current survey and other surveys.
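
The real rules are written in the engine's own script language, which is not covered here; purely as a conceptual illustration, a per-question rule behaves roughly like this hypothetical function - inspect the response, optionally replace it, and nominate the questions for the next page:

 # Conceptual stand-in only: not the rules script language itself.
 def branching_rule(response, all_responses):
     """Hypothetical rule attached to a yes/no compliance question."""
     if response.strip().lower() in ("n", "no"):
         # Non-compliance: keep the answer and route to the follow-up questions.
         return response, ["q_noncompliance_reason", "q_remediation_date"]
     if response.strip().lower() in ("y", "yes"):
         return "Yes", ["q_next_section"]          # normalise the stored response
     # Anything else: reject by asking the same question again.
     return None, ["q_compliance"]
 
 print(branching_rule("no", {}))   # ('no', ['q_noncompliance_reason', 'q_remediation_date'])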


Lastly, because of the property structure, filters, question level instances, question level exceptions and the variety of input/response question types and other capabilities, you can get a dynamically structured survey running easily without ever actually writing a rule. So rules are completely optional.


Interfacing and Distribution

Distribution

The SurveyEngine is designed as a distributed database, so it can talk with other SurveyEngines. In fact the desktop client works by using the distribution capability of the survey engine. You therefore can have a test database on a PC in which you design surveys and then distribute the designed and tested survey to one or more publishing servers, and then use the distribution mechanism to retrieve the results, publish the surveys and update the surveys.


Interfacing

The survey engine itself has an API that can be called to perform a large number of functions. Further, the engine supports a plugin API definition, accessible via commands in the rules engine script, that allows a library matching the API to be dynamically loaded and accessed. The plugin architecture allows response data to be passed to the dynamically loaded library and results retrieved from it. Depending on the rules engine command used, the returned values may be written to the response table or simply tested against some value, and decisions about which questions to display on the next page made thereon.
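
The actual plugins are dynamically loaded libraries that match the engine's plugin API; the following sketch only illustrates the data flow a rules command can drive, with invented names throughout:

 # Illustrative data-flow sketch, not the real plugin API: pass a response out,
 # get a value back, then either store it or test it.
 class EchoPlugin:
     """Stand-in for an external system reached through a plugin."""
     def process(self, question_id, response):
         # e.g. push the response to another system and return its reply
         return f"ack:{question_id}:{response}"
 
 def run_plugin_rule(plugin, question_id, response, write_back):
     result = plugin.process(question_id, response)
     if write_back:
         return ("store", result)      # rule writes the returned value to the response table
     return ("test", result == f"ack:{question_id}:{response}")  # or just tests it
 
 print(run_plugin_rule(EchoPlugin(), 17, "42", write_back=True))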


Survey Layout

When we talk about a survey we can mean both what you would expect as a survey, and also just about any kind of web page you can dream up – whether it is notionally a survey or a blog, or even a menu screen that simply allows the user to select a survey from a list of surveys they wish to answer. One layout method for a survey actually allows you to lay up a report style layout and select the appropriate word in a sentence such as “I do/do not think this sounds simple.”


The default layout is a simple question-response table layout. There are also a number of built-in layouts that arrange the questions and responses in either a table, a grid or a custom format. Multiple layouts can be used in the one survey, and indeed on the same display page.


There is also support for a fixed survey that is essentially a MS Word document saved as an input form with the inputs tied into the survey engine response tables, but you lose a lot of the capability of the survey engine in this form (as the layout is fixed by the word document format), so it is not encouraged.


A particularly interesting layout is what we call the "management report" layout. In this form the question text is a statement with selectable words/values/etc. embedded in the text - rather than a question. You embed the response part of the question in the statement using special tags, and the survey appears to the user as a series of paragraphs, each with multiple statements and each statement with one or more selectable responses that effectively construct a sentence. Since a survey question can contain both responses and response analysis from other surveys, it is possible to essentially template a management report by constructing a survey in this way that presents the results of other surveys in the survey text and invites the user to "complete the sentence" or "cross out the word not applicable".


Reporting

Standard Reports

You do not have to know anything about the database to get a report. Every survey automatically has a number of reports and groupings available without you doing anything. The reports use the survey layout to deliver their output. These reports are:

  • Individual responses by question and person
  • Response count by question
  • Responses by question
  • Responder's name by question
  • Count breakdown of responses by question
  • Percentage breakdown of responses by question
  • Percentage breakdown pie-chart by question


There are a number of predefined views that feed these reports and provide various groupings by survey:

  • By user
  • By Organisation
  • By Region
  • By Database


Special Reports

Individual Question Reports

In addition to the standard reports available for the survey as a whole, each type of report can be extracted for an individual question in any survey in an organisation by embedding special report tags into the content of a question text, or via a script command in the rules engine.


Data dump

In the event that a user wishes to extract the responses from the engine for analysis in another system, the data can be extracted using one of the views into a CSV file.
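
A minimal sketch of such a dump, assuming an ODBC connection to the survey database; the view name vwSurveyResponses, the column filter and the connection details are assumptions, not the actual object names:

 # Minimal data-dump sketch: read a response view over ODBC and write a CSV.
 # View name, column names and connection string are assumptions.
 import csv
 import pyodbc
 
 conn = pyodbc.connect(
     "DRIVER={SQL Server};SERVER=localhost;DATABASE=SurveyDB;Trusted_Connection=yes;"
 )
 cursor = conn.cursor()
 cursor.execute("SELECT * FROM vwSurveyResponses WHERE SurveyName = ?", "Weekly CSA")
 
 with open("responses.csv", "w", newline="", encoding="utf-8") as f:
     writer = csv.writer(f)
     writer.writerow([col[0] for col in cursor.description])  # header row from the view
     for row in cursor.fetchall():
         writer.writerow(list(row))
 
 conn.close()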


Archiving

A timestamped archiving table is available so that responses can be shifted out of the main response table into an archive.


User Access Control

User IDs are shared across all organisations in the database, but a user must have been granted access to an organisation before they can access anything in that organisation. Further, before the user can answer a survey, they must have been granted access to both the survey and at least one instance of the survey.


Users have rights. The user rights are defined in terms of roles. A user has a global role and an organisation role. So a user may be a survey administrator in an organisation but only a responder at the global level.


The default built-in rights are:

  • Super administrator - database level administration rights
  • Region coordinator - rights to administer a group of organisations
  • Survey administrator - organisation level administration rights - including the right to create a survey.
  • Data entry - survey specific rights to enter data in surveys on behalf of responders
  • User - rights to enter data in a survey to which they have been granted access.





CopyRight Bishop Phillips Consulting Pty Ltd 1997-2012 ( BPC SurveyManager - Introduction )