Thursday, May 29, 2014

SASP: Self-Aware Suggesting Programming


A technique that facilitates suggestions from a lower-level software stack to an upper-level stack about the various actions or cross-layer API calls that can be performed without a foreseeable error. This allows the upper-layer software stack to decide at runtime which actions and/or calls should be made, according to the lower-level stack. In existing stacks the technique can be beneficial in avoiding a large number of bugs that arise in the development/integration phase.


Introduction
In any system there are different layers or components that interact with each other to accomplish a particular task. A similar abstraction is now proposed for different devices working in tandem to provide a complete user experience.


For example, the use case of capturing an image on any handheld device requires interactions across various layers of software. A minimal implementation of the use case involves the camera firmware interacting with the OS camera driver; post-processing filters applied to the raw captured image; and the post-processed image passed on to the display, which resizes and shows it in a viewfinder application or screen. When the capture button is pressed, the image is passed on to a JPEG encoder and finally saved to a storage device.


Similarly, it is proposed that the functionality of each layer in the above example could be offloaded to the cloud or a nearby device. The interfaces used to facilitate this collaboration can be implemented in user mode, or as a complete stack of software spanning virtualization, the operating system, user mode and the application level.


There are two points worth noticing in the above discussion. Firstly, there are various software stacks or devices interacting with each other to provide the required functionality. In practice this interaction is handled and fine-tuned by engineers in the final product (as of now).


Secondly, most of the above-mentioned software/devices have a fixed way of interacting with the other software stacks/devices. Take any of the examples above: the OS driver (fixed semantics for a given OS), the camera firmware (mostly encapsulated in some standard middleware specification such as OpenMAX), and the same applies to the imaging components, encoders, etc.


So, in other words, the know-how required to integrate and deploy the system is already present within the system itself. As of now, engineers are required to understand the interaction semantics and code the applications according to the different layers of software or the interactions of devices. We argue that this could well be made autonomous: the system could learn the ways of interaction at run time and then use them. This paper presents a system offering such functionality.
   
Another concern addressed by this design is that a large portion of software bugs arise from improper calls across the layers of a complex system. Statistics suggest that, of the total bugs found in the development phase, a large portion arise from wrong actions performed by the service user that are not permitted by the service-provider software stacks. For example, an OpenMAX IL user can issue a command that is not allowed in the current state of the component. Many man-hours are wasted correcting these bugs; it has been observed in practice that much of the software development cycle consists of resolving bugs caused mainly by a lack of synchronisation between developers. Proper documentation is often sought as a foolproof solution, but practice suggests otherwise. The cost is the time taken by such adjustments, which translates into delays and hampers the product's time to market. This could be avoided with a little overhead in the implementation.
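To make the failure mode concrete, here is a minimal C fragment against the standard OpenMAX IL core API. It assumes hComp is a valid component handle that is still in OMX_StateLoaded; the IL spec only permits transitions from Loaded to Idle (or WaitForResources), so requesting Executing directly is rejected only at run time:

    #include <OMX_Core.h>

    /* hComp: a valid OpenMAX IL component handle, currently in
     * OMX_StateLoaded. Jumping straight to Executing skips the
     * mandatory Idle state, so the component can only reject the
     * command at run time. */
    static void illegal_transition(OMX_HANDLETYPE hComp)
    {
        OMX_ERRORTYPE err = OMX_SendCommand(hComp, OMX_CommandStateSet,
                                            OMX_StateExecuting, NULL);
        if (err == OMX_ErrorIncorrectStateTransition) {
            /* exactly the class of bug that a SASP-style suggestion
             * from the component would have prevented up front */
        }
    }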


The proposed technique utilises the current state information available to a software stack/device (say A) to suggest actions to the software/device (say B) that is using the services offered by A. In most cases, A is software with well-defined interfaces and behaviour, such as OpenMAX IL, OpenGL or any other protocol stack, and B is an application or middleware utilising the services offered by A.

Technique
As suggested earlier, the proposed technique utilises the information the software stack has about its current state and about the actions it is expecting or is ready to undertake.


Suppose a software stack A exposes n interfaces to its user:
Interfaces: I = {i1, i2, ..., in}


Each interface exposes some methods, where m(j,k) denotes the k-th method exposed by the j-th interface. Let us also suppose that any interface exposes at most l methods, so the set M of all methods across all interfaces has at most n*l elements.


At any given point in time, A can be in one of m states:
States: S = {s1, s2, ..., sm}


Each state sj permits only some of the calls to the methods of these interfaces:
C(sj) = {c1, c2, ..., cb},  0 <= b <= n*l
where each c is one of the methods m(j,k); that is, C(sj) is a subset of M.
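As a concrete (hypothetical) encoding of this model in C, every method m(j,k) can be flattened to a global bit position, so that each allowed-call set C(s) becomes a single bitmask. All names below are illustrative, sketching a toy camera stack, not any real API:

    #include <stdint.h>

    enum method_id {            /* flattened m(j,k) across all interfaces */
        M_CAM_OPEN    = 0,
        M_CAM_CAPTURE = 1,
        M_CAM_CLOSE   = 2,
        M_ENC_ENCODE  = 3,
        METHOD_COUNT            /* at most n*l in the notation above */
    };

    enum state_id { S_CLOSED, S_READY, S_STREAMING, STATE_COUNT };

    /* allowed[s] is the bitmask form of C(s): bit k set => method k allowed */
    static const uint32_t allowed[STATE_COUNT] = {
        [S_CLOSED]    = 1u << M_CAM_OPEN,
        [S_READY]     = (1u << M_CAM_CAPTURE) | (1u << M_CAM_CLOSE),
        [S_STREAMING] = (1u << M_CAM_CAPTURE) | (1u << M_ENC_ENCODE)
                      | (1u << M_CAM_CLOSE),
    };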


Wrong calls: calls that will foreseeably cause an error in the system, or that the service provider does not expect in its current state.
Correct calls: calls with no foreseeable error.
The total numbers of wrong calls (WC) and correct calls (CC) that can arise in the system can be expressed as follows:

WC = sum (j = 1 to m) |M \ C(sj)|
CC = sum (j = 1 to m) |C(sj)|

where M \ C(sj) is the set of methods not allowed in state sj, and |.| denotes set size.
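A toy C sketch of these sums, assuming the bitmask encoding above (the allowed[] values here are arbitrary): |C(sj)| is a population count of the state's bitmask, and |M \ C(sj)| a population count of its complement. __builtin_popcount is the GCC/Clang intrinsic; other compilers offer equivalents.

    #include <stdint.h>
    #include <stdio.h>

    #define METHOD_COUNT 4u   /* |M|, at most n*l in the model above */
    #define STATE_COUNT  3u   /* m states */

    /* allowed[j]: bitmask form of C(sj); values made up for the example */
    static const uint32_t allowed[STATE_COUNT] = { 0x1u, 0x6u, 0xEu };

    int main(void)
    {
        const uint32_t all_methods = (1u << METHOD_COUNT) - 1u;  /* the set M */
        unsigned wc = 0, cc = 0;

        for (unsigned j = 0; j < STATE_COUNT; j++) {
            cc += __builtin_popcount(allowed[j]);                /* |C(sj)|     */
            wc += __builtin_popcount(all_methods & ~allowed[j]); /* |M \ C(sj)| */
        }
        printf("CC = %u, WC = %u\n", cc, wc);   /* prints CC = 6, WC = 6 */
        return 0;
    }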

Now suppose the software stack/device A could communicate with the user software/device B after each call, suggesting which calls would be wrong and which correct. The service user would then know which calls to the service provider it can make; depending on the result it needs for its task, it can call the required method of the appropriate interface. This closely resembles a closed feedback loop (akin to an OODA loop), except that the control depends not on the outcome of the previous call but on the calls the service provider reports as allowed, together with the result the service user requires.

This could well be implemented using a callback that returns a bitmap after each call, indicating the allowed calls. Using a bitmap for the suggestions requires only a small initialization overhead, and looking up the supported calls in the current state amounts to testing bits in the bitmap returned by the software stack, which can be accomplished in O(1) using the bit-manipulation instructions offered by almost all processors.
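A minimal sketch of such a callback in C, reusing the bitmask encoding from earlier; the sasp_* names are hypothetical, not part of any existing API:

    #include <stdint.h>

    /* Provider (A) invokes this after every call, passing a bitmask in
     * which bit k is set iff method k is allowed in A's new state. */
    typedef void (*sasp_suggest_cb)(uint32_t allowed_mask, void *user_ctx);

    static uint32_t last_allowed;      /* latest suggestion from A */

    static void on_suggest(uint32_t allowed_mask, void *user_ctx)
    {
        (void)user_ctx;
        last_allowed = allowed_mask;
    }

    /* Service user (B): O(1) bit test against the latest suggestion. */
    static int call_is_safe(unsigned method_id)
    {
        return (last_allowed >> method_id) & 1u;
    }

    /* The feedback loop: B picks the call its task needs, but issues it
     * only if A's last suggestion marks it as free of foreseeable error. */
    static void next_step(unsigned wanted_method)
    {
        if (call_is_safe(wanted_method)) {
            /* issue the call; A will invoke on_suggest() again afterwards */
        } else {
            /* defer, or pick another method that is currently allowed */
        }
    }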

Using this callback, the following are analysed in detail in light of the newly proposed technique:
1. Memory footprint
2. Latency
3. Number of man-hours saved in development
4. Improvement in debugging options


Feel free to add to the discussion.
