Science Tools Corporation
Questions and Answers

Product-Specific Questions and Answers, Arranged by Topic

1. Short-term or one-time customers

Question:

Clients may have a large number of short-term customers or one-time recipients of certain data products. Explain how your system accommodates such users.

Response:

We believe it is inappropriate to provide one-off solutions for special cases; instead, products should be designed so that the normal case handles the exceptions in an identical manner - to the software, there is no exceptional case. Given this viewpoint, our software treats such customers like any other: they have a place in the system and may be described as completely, or as incompletely, as clients wish.

Clearly, if the data involved in such transactions are public, there is no concern and no need to pose this question. We have been considering the case of private transactions for some time, and we have increased our research and development efforts in encryption and customer communications in the context of scientific computing and results publishing and distribution. We are presently in the later stages of developing methods that may be very useful for serving large numbers of short-term relationships, which we may be able to demonstrate to you on or about July 19, 2001.

2. Customer satisfaction

Question:

Explain how your solution addresses measurement of customer satisfaction and development of new products and services to meet changing market demands.

Response:

We believe the correct approach is to permit customers - or prospective customers - to provide this data at the time they interact with the system. The need for development of new products, for example, can be expressed as a request for a product the system cannot presently deliver. What is critical here is that customers have processes and procedures in place to review such instances regularly, though the software can provide notification automatically. We also believe that satisfaction feedback should not be something that is essentially forgotten for long periods, as is so common. Rather, feedback should be built into every customer interaction screen so that it is always context-specific. Only with such a mechanism can a true reflection of customers' perceptions be recorded.
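The following sketch, in Python with hypothetical names and fields rather than our actual interfaces, illustrates the idea: every interaction screen records feedback tagged with its context, and requests the system cannot deliver are recorded as raw material for new-product review.

    # Hypothetical sketch: context-tagged feedback captured at the point
    # of interaction, plus a record of requests the system could not
    # fulfill. Names and fields are illustrative, not our actual schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class FeedbackRecord:
        customer_id: str
        screen: str            # which interaction screen produced this
        rating: int            # e.g. 1 (poor) through 5 (excellent)
        comment: str = ""
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    @dataclass
    class UnfulfilledRequest:
        # A product request the system could not presently deliver:
        # raw material for new-product review.
        customer_id: str
        requested_product: str
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    def record_feedback(store: list, fb: FeedbackRecord) -> None:
        # Every screen calls this, so feedback is always context-specific.
        store.append(fb)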

3. Tracking of data product requests

Question:

Fully describe how your solution may allow tracking of data product requests from placement to fulfillment.

Response:

In our system, product-generation steps are discrete elements and an integral part of the system itself. They are declared in the process of teaching BigSur how to perform your science, which includes the process flow and the necessary automation information. A product-generation request is implemented simply as the placement of a request in a queue for such processes to be started, with specific arguments provided by the requestor. Tracking, therefore, is simply a matter of providing user interfaces that report this information to the right parties at the right times, with appropriate context and presentation.
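A minimal Python sketch of this idea follows; the names and fields are hypothetical, not our production schema. A request is an enqueue, and tracking is a report over the queue.

    # Hypothetical sketch of a product-generation request as a
    # work-queue entry; names and fields are illustrative only.
    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        QUEUED = "queued"
        RUNNING = "running"
        COMPLETE = "complete"
        FAILED = "failed"

    @dataclass
    class ProductRequest:
        requestor: str
        process: str                       # a declared processing step
        args: dict = field(default_factory=dict)
        status: Status = Status.QUEUED

    work_queue: list[ProductRequest] = []

    def place_request(requestor: str, process: str, **args) -> ProductRequest:
        # Placing a request is just an enqueue; the daemons do the rest.
        req = ProductRequest(requestor, process, args)
        work_queue.append(req)
        return req

    def report(requestor: str) -> list[tuple[str, str]]:
        # Tracking is reporting: the UI decides how much detail to show.
        return [(r.process, r.status.value)
                for r in work_queue if r.requestor == requestor]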

For example, while there may be fifteen steps in the generation of a given data product, it may be inappropriate to tell the customer about each discrete step; a summary may be more appropriate. Further, it may or may not be desirable to evaluate past process runs to calculate an estimated time of completion, depending on who the customer is and other situation-dependent criteria. This is the job of the user interface.

Devising more sophisticated approaches with BigSur is easy. Keeping in mind that customer data-product production has a large customer-directed component, in contrast to the creation of standardized data products, it may be very beneficial to create an object within BigSur for each customer request and populate it with the requisite meta-data about each scientific processing step required to achieve the final result. This provides several benefits, such as easy isolation of individual requests for reporting purposes: an application GUI can track these steps and check them off against the reported run-status of a live processing queue.

Since processing steps can be - and hopefully often are - run for the benefit of multiple users, these individual users' plans can be used by process-planning software to group and reorder requests to optimize resource utilization, instead of merely placing process requests in the work queue. (Note that identical process runs are automatically detected by the system, and the default action is never to re-run a process but merely to return the result obtained previously. The latter case is therefore not as bad as it might at first appear.)

With a strategy like this, the specific security sensitivities of status reporting can then be attended to, allowing a customer to receive very detailed status information, or none at all, as appropriate. This is a straightforward use of our system, and it is easy to see how the same feature can be put to further use to address other problems or desires, such as after-the-fact analysis of process flow to look for processing issues, opportunities to optimize, and so on.
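A Python sketch of the per-request object and the never-re-run default might look like the following; again, the names are hypothetical and the real meta-data is far richer.

    # Hypothetical sketch: a per-request object carrying meta-data for
    # each processing step, with the default of never re-running an
    # identical process. Names are illustrative; argument values are
    # assumed hashable for the cache key.
    from dataclasses import dataclass, field

    completed_runs: dict[tuple, str] = {}   # (process, frozen args) -> result

    @dataclass
    class Step:
        process: str
        args: dict
        done: bool = False
        result: str | None = None

    @dataclass
    class CustomerRequest:
        customer_id: str
        steps: list[Step] = field(default_factory=list)

    def run_step(step: Step) -> str:
        key = (step.process, frozenset(step.args.items()))
        if key in completed_runs:
            # Identical run detected: return the prior result, do not re-run.
            step.result = completed_runs[key]
        else:
            step.result = f"output-of-{step.process}"  # stand-in for real work
            completed_runs[key] = step.result
        step.done = True
        return step.result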

4. Workflow management, job scheduling, and dynamic resource allocation

Question:

Describe how your solution allows effective implementation of workflow management, job scheduling, and dynamic computing resource allocation.

Response:

As stated above, process descriptions are integral to the system. Our system uses a work queue to provide execution contexts - to instantiate processes that have been requested. Our Distributed Processing System provides for the dispatching of processes when and where processing is needed. The daemon processes that observe the work queue contain basic guidelines on what processes may or may not be dispatched, where, and when. They observe, for example, whether the request for a process run is complete, what processing group the process is in, whether the appropriate time has come to run the process, and so on. The process itself, meanwhile, has some intelligence of its own: once started, it looks for its input arguments and confirms they exist, and then either re-queues itself, continues on, or errors out. Each process has control over its behavior, and our system provides robust flexibility for controlling process behavior at run-time, including directing the process to attempt to restart after a previous failure.
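The following sketch, in Python with hypothetical names and fields, illustrates this division of labor between the daemon's guidelines and the process's own input checking; it is not our actual implementation.

    # Hypothetical sketch: the daemon applies its basic guidelines before
    # starting a process, and the process itself checks its inputs and
    # decides whether to re-queue, continue, or error out.
    import time

    def eligible(req: dict, daemon_groups: set, now: float) -> bool:
        # Daemon-side guidelines: request complete, in a group this
        # daemon serves, and due to run.
        return (req.get("complete", False)
                and req.get("group") in daemon_groups
                and req.get("not_before", 0.0) <= now)

    def run_process(req: dict, queue: list, now: float) -> None:
        # Process-side behavior once started.
        if not all(req.get("inputs", {}).values()):
            req["not_before"] = now + 60.0  # inputs missing: re-queue itself
            queue.append(req)
            return
        print("running", req["process"])    # stand-in for the real work

    def daemon_step(queue: list, daemon_groups: set) -> None:
        # One pass over the work queue; a real daemon loops forever.
        now = time.time()
        for req in list(queue):
            if eligible(req, daemon_groups, now):
                queue.remove(req)
                run_process(req, queue, now)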

Potential work-flow is declared directly into the meta-data the system manages. The system knows, for example, what the inputs to a given process are and what its outputs are. There are opportunities to explicitly declare that the output of one process is a particular input argument of another process, so processes can be "glued" together reasonably easily, thereby aiding automation. Also, as a process completes, it can check to see which processes might want to run using the output being generated, and where possible those down-stream processes are marked runnable. In some cases, our customers actually queue up new processing jobs, depending on the state of the system as the process is about to exit.
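A hypothetical Python sketch of such declared wiring follows; the process names and the glue mapping are invented for illustration only.

    # Hypothetical sketch of declared process wiring: each process
    # declares its inputs and outputs, and an explicit mapping "glues"
    # one process's output to another's input argument.
    process_defs = {
        "calibrate": {"inputs": ["raw_scene"], "outputs": ["cal_scene"]},
        "mosaic":    {"inputs": ["cal_scene"], "outputs": ["mosaic_tile"]},
    }

    # The output "cal_scene" of "calibrate" feeds the same-named
    # input argument of "mosaic".
    glue = {("calibrate", "cal_scene"): ("mosaic", "cal_scene")}

    def on_process_complete(name: str, outputs: dict, mark_runnable) -> None:
        # As a process completes, see which down-stream processes can
        # now be marked runnable with the output being generated.
        for out_name, value in outputs.items():
            target = glue.get((name, out_name))
            if target:
                downstream, arg = target
                mark_runnable(downstream, {arg: value})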

The system in and of itself just follows orders, and it is essentially process-driven: when there is data to process and a process to do it, when the time has come to execute it, and when a processing daemon is ready for another job, it gets run. There are several explicitly noted locations within the system where it can be augmented for optimization at many different levels. Good controls over existing components also permit easy extension to accommodate most any additional control, whether a form of automation or manual intervention. Our comments in response to Concern 8 are relevant here as well.

5. Monitoring and quality control

Question:

Provide information about methodologies and mechanisms used in your solution to monitor and control the quality of customers' products and services.

Response:

The question of quality control has two aspects in the context of this white-paper: the quality of the system software, and the quality of the data products served to customers. The quality of generated data products largely depends upon the quality of customers' processors and does not fall within the purview of our software; however, where confirming algorithms exist, our software can automate them. Such quality-assurance algorithms may be built into the processing steps, or may be run on a one-off basis, perhaps determined when the data product is ordered or on the fly by the processing steps themselves.
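As an illustration only - in Python, with hypothetical names, and not our actual mechanism - a confirming algorithm built into a processing step might be wrapped like this:

    # Illustration: wrapping a processing step with a confirming
    # (quality-assurance) algorithm so the output is validated before
    # being accepted. Real checks come from the customer's science team.
    from typing import Callable

    def with_qa(step: Callable[[dict], dict],
                check: Callable[[dict], bool]) -> Callable[[dict], dict]:
        def wrapped(inputs: dict) -> dict:
            output = step(inputs)
            if not check(output):
                raise ValueError(f"QA check failed for {step.__name__}")
            return output
        return wrapped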

Our software tracks status at every opportunity, reporting status into the database and recording information into log files. Some of these log files are under the control of developers, while others are not optional and are recorded by the system upon every occurrence. Further, when errors occur, the system can send email messages to staff members, configurable at the individual-process and error-severity levels. These logs give systems staff the opportunity to discover problems when they occur, so they can resolve them and ensure quality.
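A minimal Python sketch of severity- and process-configurable error notification follows; the configuration shape, addresses, and mail transport are hypothetical, not our actual implementation.

    # Hypothetical sketch of error notification configurable per process
    # and per severity level. Addresses and transport are illustrative.
    import logging
    import smtplib
    from email.message import EmailMessage

    # Who gets mail, and at what minimum severity, for each process.
    notify_config = {
        "calibrate": {"min_level": logging.ERROR, "to": ["ops@example.com"]},
    }

    def report_error(process: str, level: int, message: str) -> None:
        logging.log(level, "%s: %s", process, message)   # always logged
        cfg = notify_config.get(process)
        if cfg and level >= cfg["min_level"]:
            msg = EmailMessage()
            msg["Subject"] = f"[{logging.getLevelName(level)}] {process}"
            msg["From"] = "bigsur@example.com"
            msg["To"] = ", ".join(cfg["to"])
            msg.set_content(message)
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)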

Note that processes - discrete functional steps - may be developed specifically to monitor overall system behavior and help assure quality at every level. Such processes may be easily automated by our system, and reports may be emailed to staff using our standard mechanisms.

 

