2008 Q4 CONFIGURATION MANAGEMENT
(26-05-2016, 09:00 AM)dorothy.pipet Wrote: Another attempt for comments please

More substantial than previous attempts and OK as far as it went; however, I don't think it went far enough. I have therefore taken the time to indicate the areas that I think were weak or missing entirely, and to explain the background to the sort of things that should have been incorporated. This is quite a long response; it is not intended as a direct answer to the question set, but it should help you realise where you could have answered better.

I thought your choice of the Smartlock interlocking was an excellent example of a system with which to explain why configuration management is necessary and how it can be managed, but unfortunately you did not exploit it for all it was worth.

To my mind you discussed version control and change management a little too much; they are certainly relevant, but I feel the core of the question was to consider how the various elements which comprise the system need to be COMPATIBLE in order to operate together successfully. I must admit that such a response might skew the answer too far towards module 7, so there is definitely a need to emphasise the SAFETY impact rather than the system simply not working.

That is indeed why the Smartlock system is potentially such a good example:
  1. There are a large number of separate software components that make up a system (just compare the quantity of information included within a release Baseline with that of the SSI CISR), quite apart from the definition of all the interfaces with other Smartlock interlockings, the Control System etc.
  2. As an interlocking, it is easy to find examples where small mismatches could lead to very significant accidents, so it is clearly Safety Critical.
You are right though not to concentrate exclusively on software. 
Certainly I would have used the example of the Stockley incident of some years ago, which resulted in a lack of effective locking on points associated with a swinging overlap.
  • The scenario was created by a change over the years in the methodology for writing cross-boundary data, combined with the desire to implement the new alteration to the current standards while leaving much of the existing data untouched and thus to the old standard.
  • Either method is acceptable, and the “first testing pass” data for the various routes had been implemented compatibly in both interlockings; however, in the course of addressing a Test Log, a designer decided to correct what they saw as the anomaly of certain data in their interlocking being implemented differently and brought it into compliance with the modern standard, albeit contrary to the original intention.
  • Even more regrettably, the implications of this were not recognised; the data within the neighbouring interlocking was not amended to match and so became incompatible. This led to a set of cross-boundary points being called to their other lie without an availability check having been performed in either of the interlockings. The consequences could have been extremely severe: points moving under a train on a stretch of 4-line railway with a high permissible speed, from which a multiple-train head-on collision could easily have resulted.
This example does illustrate that Change Control is one of the pre-requisites of Configuration Management, but the most relevant point for answering this question is the issue of compatibility: it did not matter whether it was interlocking 1 or interlocking 2 (the two straddled by the points) that tested them for availability, but it was crucial that one of them did so, and the inconsistent methodologies adopted resulted in a total lack of locking in one specific scenario (the data was otherwise effective, and hence the problem was not uncovered by testing undertaken strictly in conformance with the Works Testing Handbook). This incident is actually one of several that have occurred with swinging overlaps implemented in SSI-style data; it is far too easy for people to get things subtly wrong. One of the mitigations since put in place is the running of computer test scripts to exercise the data as an enhanced “Rogue Point Test”, which I regard as a pragmatic response but also a tacit admission that our other processes are insufficient to prevent such a problem arising in the first place!
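Purely to make that mitigation concrete, here is a minimal sketch of the sort of scripted check such tooling might perform; this is not the actual Rogue Point Test tooling, and every name and data structure in it is invented for illustration. The idea is simply to walk the cross-boundary scenarios and flag any in which points could be called without either interlocking having performed an availability check:

    # Hypothetical sketch only: the real test scripts and data formats differ.
    from dataclasses import dataclass

    @dataclass
    class BoundaryEnd:
        """One interlocking's view of a set of cross-boundary points."""
        interlocking: str
        checks_availability: bool   # does this interlocking's data prove the points free?
        calls_points: bool          # does this interlocking's data call the points?

    def rogue_point_risks(scenarios):
        """Return scenarios where points are called but neither end checks availability."""
        risks = []
        for name, ends in scenarios.items():
            called = any(end.calls_points for end in ends)
            checked = any(end.checks_availability for end in ends)
            if called and not checked:
                risks.append(name)
        return risks

    # Example: a swinging overlap where one interlocking was "corrected" to the
    # newer data style while its neighbour still assumes the old division of work.
    scenarios = {
        "swinging overlap via boundary points": [
            BoundaryEnd("IXL_A", checks_availability=False, calls_points=True),
            BoundaryEnd("IXL_B", checks_availability=False, calls_points=False),
        ],
    }

    for risk in rogue_point_risks(scenarios):
        print(f"ROGUE POINT RISK: {risk} - points callable with no availability check")

The value of such a script is exactly the point above: it exercises the COMBINATION of both interlockings' data, which testing each interlocking strictly against its own data would never reveal.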

The first part of the question asked why CM is necessary throughout the whole life-cycle; your answer seemed quite weak on this. It did little to explain the WHY, and the LIFE-CYCLE (note: the system life, not just the duration of project implementation) was treated very scantily. You mentioned design and test, but really only in the context of implementing a project; you could have discussed:
  • Defining the client’s requirements for what the project is intended to achieve,
  • Getting the locations built to an initial design (i.e. before it achieved AFC status, and therefore perhaps before an IDC was held, so the original version of the TFM allocations might need to be altered once the niceties of the data implications became clear; an incident comes to mind in which the original plan of providing a new TFM for additional TPWS changed to using that new TFM to reallocate the ATP bit, so as to free up a bit on the signal’s own TFM for the TPWS bit),
  • Correlation status of the existing wiring, equipment mod state and software versions at any interface
  • Transferring Inherited SSI data into Smartlock, 
  • Subsequent alterations to the data of a previously commissioned Smartlock,
  • Constant refinement of the SSI DIS papers that give guidance on the data structures to be used,
  • Implications of interlocking changes upon the data for the technician’s support system,
  • Compatibility of Smartlock with SSI TFMs of various mod states,
  • Ensuring that the testing is undertaken on a known compatible system (data testing is generally performed off-site, and the hardware / software versions of the generic product will continually evolve, so the “target system” supplied for different projects will differ and the off-site testing rig must be known to be compatible with the one it is emulating; see the sketch after this list),
  • the fact that the version of the data upon which a tester has written a Test Log may not be the current version that the designer is working upon, 
  • Control of the USB sticks: the data issued to off-site and on-site testers (who may at times deliberately be using different versions for Principles testing and for Through testing), then those to be placed within the commissioned equipment, and those given as the maintainers’ spares holding (which also needs the relevant hardware of the correct mod status etc.),
  • Often a series of stage-work versions of data is needed, perhaps built upon each other or each derived individually by working back from the “final data”, and a change made in one of them may need to be replicated similarly across the whole set,
  • Obsolescence management, when perhaps the software needs to be transferred to new hardware, or the external TFMs eventually get replaced by the new generation of I/O.
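To illustrate the “known compatible system” bullet above, here is a very rough sketch of the sort of mechanical comparison a CM baseline makes possible; the item names, version strings and flat-manifest format are all my own inventions for the example:

    # Illustration only: comparing an off-site test rig's configuration against
    # the baseline of the target system it is meant to emulate.
    target_baseline = {
        "interlocking_core_sw": "3.2.1",
        "data_prep_toolset": "7.4",
        "geographic_data": "Issue 12 draft C",
        "panel_processor_sw": "2.0",
    }

    test_rig = {
        "interlocking_core_sw": "3.2.1",
        "data_prep_toolset": "7.3",      # rig lags behind the target system
        "geographic_data": "Issue 12 draft C",
        "panel_processor_sw": "2.0",
    }

    def compatibility_report(rig, target):
        """List every configuration item where the rig differs from the target."""
        return [
            (item, rig.get(item), target.get(item))
            for item in sorted(set(rig) | set(target))
            if rig.get(item) != target.get(item)
        ]

    for item, rig_ver, target_ver in compatibility_report(test_rig, target_baseline):
        print(f"MISMATCH {item}: rig={rig_ver} target={target_ver}")

The point is not the few lines of code, but that without a defined baseline on each side there is nothing against which such a comparison can be made.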
You needed to get across the message that CM is more than bureaucratic form filling and that it is an essential underlying element for any form of quality management and assurance process. 
The essence is controlling the COMBINATION of things to define exactly the system being considered. 
Lifecycle starts at the first definition and continues until disposal:
  • as-planned, 
  • as-designed, 
  • as-built, 
  • as-evolved during support phase.
CM has 4 key elements: 
  • Identification 
  • Change Management 
  • Status Accounting (i.e. which changes have been incorporated, which are approved but not yet incorporated, which are currently potential changes being assessed etc.)
  • Audit.
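As an aside, status accounting in particular is easy to picture as nothing more than a disciplined record of where every change stands against a named baseline; the states, fields and change references below are invented for illustration and do not represent any particular toolset:

    # Minimal illustration of status accounting.  All values are made up.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class ChangeState(Enum):
        PROPOSED = "under assessment"
        APPROVED = "approved but not yet incorporated"
        INCORPORATED = "incorporated"

    @dataclass
    class ChangeRecord:
        ref: str
        description: str
        state: ChangeState
        incorporated_in: Optional[str] = None   # baseline identity once incorporated

    changes = [
        ChangeRecord("CR-014", "Re-write cross-boundary overlap data to new style",
                     ChangeState.APPROVED),
        ChangeRecord("CR-015", "Reallocate ATP bit to the new TFM",
                     ChangeState.INCORPORATED, incorporated_in="Baseline 4"),
    ]

    # Status accounting answers "what exactly is in Baseline 4, and what is still pending?"
    for change in changes:
        where = f" ({change.incorporated_in})" if change.incorporated_in else ""
        print(f"{change.ref}: {change.state.value}{where}")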
   
I thought your answer was better regarding HOW it is managed, but it did seem to be biased towards document version control rather than checking software versions, check-sums and CISR / baselines.
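By “check-sums” I mean the simple mechanical discipline of confirming that what is actually loaded or issued matches what the release baseline says it should be; a rough sketch, with the file names, dummy checksum values and the choice of SHA-256 all assumed purely for the example:

    # Illustration only: verifying issued data/software against the checksums
    # recorded in a release baseline.  Values below are dummies.
    import hashlib
    from pathlib import Path

    release_baseline = {
        "ixl_data.bin": "aa11dummy",
        "ixl_core.bin": "bb22dummy",
    }

    def verify_against_baseline(directory, baseline):
        """Return items that are missing or whose checksum differs from the baseline."""
        problems = []
        for name, expected in baseline.items():
            path = Path(directory) / name
            if not path.exists():
                problems.append((name, "missing"))
            elif hashlib.sha256(path.read_bytes()).hexdigest() != expected:
                problems.append((name, "checksum mismatch"))
        return problems

    print(verify_against_baseline("/media/usb_stick", release_baseline))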
In essence any CM plan needs 
  • to identify which of the various input documents and products of the design process actually need to be under configuration control, and to define them as “Configuration Items”,
  • to control changes to these CI,
  • to record any changes to these CI,
  • to assess the impact of any such change on anything else,
  • to recognise that at any time there may be several extant versions of an element in use by different people for different purposes on a project, and that the version being used on this project may differ from those used for different projects elsewhere.
I didn’t really understand your 2nd bullet (perhaps the first word is missing and I read the second as “hint” but perhaps it was “Maint”…).

I am not sure either about your comment on new TFMs. As I understand it, NR have over recent years become increasingly concerned that the current loop from a DLM should be kept short and therefore contained within a single location or rack within an REB. I don’t think this is driven by new TFMs, and I am not actually sure that any evidence has emerged from the decades of previous practice to form a basis for the decision, so I don’t see it as a CM issue; however, if you are right that new TFMs are less resilient than older versions, then that would certainly be a compatibility and therefore a CM consideration.

Part b was tackled reasonably, but I think the fact that there is version control and a formal record of changes as the design evolves towards approval is far more important from a CM perspective than the traceability of the individuals involved.
Don’t forget also the various production aids, test tools, simulators and suchlike used for verification and validation. Similarly, the training courses, support manuals, standard spares list and a host of other documentation must be kept in step: not only to ensure that the correct information is supplied to the client upon original commissioning, but also continually reviewed so that it stays appropriate. This of course does NOT mean always updating to the latest version of the product, since that would be incorrect for the specific installation; however, when hardware is raised to a new mod state, or indeed replaced because of obsolescence, the installed system must continue to be supported.
Similarly, if the system software (as opposed to the site-specific data) is upgraded to resolve an issue, then a decision needs to be made as to whether this change is to be applied at each site or whether a site is to retain its original version; of course any incompatibility issues must be anticipated. Over time, the particular mix of standards, hardware components and software components at the various installations of what is ostensibly one product will diverge and can become a CM nightmare; this is one driver to bring particular sites into conformance with one of a limited range of possible combinations. In the commercial software world one sees this in the likes of Microsoft trying to force consumers onto their most recent version of Windows because they don’t want to support the legacy versions; the relationship between a railway and its supply chain is somewhat different, in that products contractually have to be supported for typically 25-30 years, and CM is very important for this. None of your answer appeared to address this.
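Purely as an illustration of that “limited range of possible combinations” point, one can picture the installed base being checked against the supported combinations something like this; the sites, versions and combinations are all invented:

    # Illustration only: flagging sites whose installed combination has diverged
    # from the small set of supported configurations.
    supported_combinations = [
        {"hardware": "Mk2", "core_sw": "3.2", "data_tools": "7.4"},
        {"hardware": "Mk3", "core_sw": "4.0", "data_tools": "8.1"},
    ]

    installed_base = {
        "Site A": {"hardware": "Mk2", "core_sw": "3.2", "data_tools": "7.4"},
        "Site B": {"hardware": "Mk2", "core_sw": "3.1", "data_tools": "7.2"},  # diverged
        "Site C": {"hardware": "Mk3", "core_sw": "4.0", "data_tools": "8.1"},
    }

    for site, config in installed_base.items():
        ok = config in supported_combinations
        print(f"{site}: {'supported combination' if ok else 'NON-CONFORMANT - plan to converge'}")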


I think that this “continued support of an installed base of similar but different implementations” was really the intention of the last part of the question; the only hint that your answer was attempting to address it was the discussion of changes in the post-construction phase through Test Logs and Mod Sheets. That phase is important (how many times has a change intended to address one issue resulted in the creation of a different, and sometimes worse, issue?) and hence worth including somewhere, but I don’t think it was what this last part was really seeking.


In summary, I think that your answer showed that your experience was as a Control Table and data designer and as a consequence you had interpreted the question too narrowly.  
You made some reasonable points and it was OK as far as it went, but it was too limited. I don’t think you mentioned the word “baseline” at all; version control is not just about being certain of getting the latest version of anything, but about being able to get the relevant defined version which should be being used at that time by that process, and that is often not the same thing. I am sure that you know this (for example, the various “drops” of Smartlock interlocking data that are given to Delta Rail as the supplier of the control system with which it is to interface: they need to work on a series of known versions with the changes between them identified, rather than being told about each individual change in real time), but your answer really did not get that across.
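To picture that “series of known versions with the changes between them identified” idea, a trivial sketch (the manifest format and item names are invented) of deriving the change list that should accompany each drop:

    # Illustration only: the delta between two successive data "drops".
    drop_3 = {"route_data": "v12", "points_data": "v7", "overlap_data": "v4"}
    drop_4 = {"route_data": "v13", "points_data": "v7", "overlap_data": "v5"}

    changed = {item: (drop_3.get(item), drop_4[item])
               for item in drop_4 if drop_3.get(item) != drop_4[item]}
    print("Changed since previous drop:", changed)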

I am sure that the examiners would encounter far worse attempts and this answer would therefore look relatively good by comparison; I think it would just about pass, being pulled through by the middle section.  I am sure that you could have done quite a lot better if you had embraced the whole question and hopefully the above can help you see that you could have done so.
PJW