Saturday, January 7, 2012

Purify Linux - Usage Overview

Source: http://www.ibm.com/developerworks/rational/library/05/r-3120/index.html

By analyzing a small program, this post demonstrates two things:
(1) How to verify that Purify is properly installed on Linux.
(2) How even a small, simple program can easily contain errors, and what fixes are needed.

conv.c below is a C program that converts Fahrenheit to Celsius, slightly modified from the example in Kernighan and Ritchie's The C Programming Language.

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

main()
{
 int *fahr;
 fahr = (int *)malloc(sizeof(int));

 int celsius;
 int lower, upper, step;
 celsius = lower + 10;
 lower = 0; /* lower limit of temperature table */
 upper = 300; /* upper limit */
 step = 20; /* step size */

 *fahr = lower;
 while(*fahr <= upper){
   celsius = 5 * (*fahr - 32) / 9;
   printf("%d\t%d\n", *fahr, celsius);
   *fahr = *fahr + step;
 }
}


To analyze this program with Purify, instrument it by adding purify in front of the link command. You can add the -g option so that Purify's diagnostic messages carry more detail:
purify gcc [-g] conv.c


Figure 1: Building and instrumenting the conv.c program with the gcc compiler and Purify



If you compile and link the program in separate steps, specify purify only on the link line:
Compile line: gcc -c [-g] conv.c
Link line: purify gcc [-g] conv.o


The compilers supported by Rational PurifyPlus for Linux and UNIX are listed below.
Source: http://www-01.ibm.com/software/awdtools/purifyplus/unix/sysreq/

  • Sun UltraSPARC®
    • Operating systems: Solaris® 10 base through 5/09; Solaris 9 base through 9/05; Solaris 8 base through 2/04
    • Compilers: Sun C/C++ 5.3 through 5.10; GNU gcc/g++ 4.0 through 4.4; GNU gcc/g++ 3.0 through 3.4
  • AMD64™ / Intel® 64 (Solaris)
    • Operating systems: Solaris 10 6/06 through 5/09
    • Compilers: Sun C/C++ 5.8 through 5.10; GNU gcc/g++ 4.0 through 4.4; GNU gcc/g++ 3.4
  • Intel IA-32 (Linux)
    • Operating systems: RHEL 5 (Server/Desktop) base through 5.4; RHEL 4 (AS/ES/WS) base through 4.8; RHEL 3 (AS/ES/WS) base through U9; SLES 11 base; SLES 10 base through SP2; SLES 9 base through SP4
    • Compilers: GNU gcc/g++ 4.0 through 4.4; GNU gcc/g++ 3.2 through 3.4; Intel icc 11.0; Intel icc 10.1
  • AMD64 / Intel 64 (Linux)
    • Operating systems: RHEL 5 (Server/Desktop) base through 5.4; RHEL 4 (AS/ES/WS) base through 4.8; SLES 11 base; SLES 10 base through SP2; SLES 9 base through SP4
    • Compilers: GNU gcc/g++ 4.0 through 4.4; GNU gcc/g++ 3.2 through 3.4; Intel icc 11.0; Intel icc 10.1
  • IBM® POWER4 / POWER5 / POWER6 (AIX)
    • Operating systems: AIX® 6.1 base through TL3; AIX 5L v5.3 TL5 through TL9
    • Compilers: IBM XL C/C++ 10.1; IBM XL C/C++ 9.0; IBM XL C/C++ 8.0; IBM XL C/C++ 7.0; GNU gcc/g++ 3.4


Platform support added as of release 7.0.1.0-002:


Solaris
  • Solaris 10 update 8 
  • Solaris Studio 12.1 
  • gcc 4.5 
  • gdb 6.8, 6.9, 7.0, 7.1 
Linux
  • Red Hat Enterprise Linux 5.5 (Server/Desktop) 
  • SUSE Linux Enterprise Server 10 SP3 
  • gcc 4.5 
  • gdb 6.8, 6.9, 7.0, 7.1 
  • icc 11.1 
AIX
  • AIX 6.1 TL4 
  • AIX 5.3 TL10, TL11 


Now run the instrumented program and look at the results. While the program's output appears on the command line as in Figure 2, Purify displays its analysis in a window as in Figure 3.


Figure 2: Program execution output



Figure 3: Purify analysis of the program run


To see the detected errors and leaks in more detail, expand the corresponding entries as in Figure 4. Purify shows in detail where each problem occurred.

Figure 4: Detailed view of a problem (built without the -g option)



Figure 4-1: Detailed view of a problem (built with the -g option)


If you need help resolving a problem, select the error (for example, "UMR: Uninitialized memory read" in Figure 4) and click the question mark, or choose Explain message from the Actions menu as in Figure 5.

Figure 5: Items in the Actions menu


Information about the UMR error message appears, as shown in Figure 6.

Figure 6: Explanation of the UMR error message

Using this help, you can determine how to resolve the problems Purify detected:
  • Delete the line that assigns to celsius using the value of the uninitialized lower variable. (You could instead move the assignment to after lower is initialized, but here the line simply isn't needed.) This resolves the UMR that Purify reported.
     celsius = lower + 10;
  • Call free on fahr to release the memory allocated to it. This resolves the memory leak (MLK) that Purify reported. A corrected version of the program is sketched below.
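
For reference, here is a sketch of what the corrected program might look like (the assignment that read the uninitialized lower is removed, and the allocated block is freed before exit):

#include <stdio.h>
#include <stdlib.h>

int main()
{
   int *fahr;
   int celsius;
   int lower, upper, step;

   fahr = (int *)malloc(sizeof(int));
   lower = 0; /* lower limit of temperature table */
   upper = 300; /* upper limit */
   step = 20; /* step size */

   *fahr = lower;
   while (*fahr <= upper) {
      celsius = 5 * (*fahr - 32) / 9;
      printf("%d\t%d\n", *fahr, celsius);
      *fahr = *fahr + step;
   }
   free(fahr); /* release the heap block: this fixes the MLK */
   return 0;
}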
After fixing the problems, instrument and run the program with Purify again to confirm that they are resolved. Figure 7 shows the clean result.


Figure 7: Purify analysis after the fixes

The error is fixed and the memory leak is plugged. The program is now in good shape.

Linked Data

Source: http://www.ibm.com/developerworks/rational/library/basic-profile-linked-data/index.html

There is interest in using Linked Data technologies for more than one purpose. We have seen interest in using them to expose information -- public records, for example -- on the Internet in a machine-readable format. The IBM® Rational® team has been using Linked Data as an architectural model and implementation technology for application integration.

We would like to share information about how we are using these technologies, the best practices and anti-patterns that we have identified, and the specification gaps that we have had to fill. These best practices and anti-patterns can be classified according to (but are not limited to) the following categories:
  • Resources
A summary of the HTTP and RDF standard techniques and best practices that you should use, and anti-patterns you should avoid, when constructing clients and servers that read and write Linked Data 
  • Containers
Defines resources that allow new resources to be created using HTTP POST and existing resources to be found using HTTP GET 
  • Paging
Defines a mechanism for splitting the information in large resources into pages that can be fetched incrementally 
  • Validation
Defines a simple mechanism for describing the properties that a particular type of resource must or may have

The following sections provide details regarding this proposal for a Basic Profile for Linked Data.

Basic Profile Resources

Basic Profile Resources are HTTP Linked Data resources that conform to simple patterns and conventions. Most Basic Profile Resources are domain-specific resources that contain data for an entity in a domain. All Basic Profile Resources follow the rules of Linked Data:
  1. Use URIs as names for things. 
  2. Use HTTP URIs so that people can look up those names. 
  3. When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL). 
  4. Include links to other URIs so that people can discover more things.
Basic Profile adds a few rules. Some of these rules could be thought of as clarification of the basic Linked Data rules.
  1. Basic Profile Resources are HTTP resources that can be created, modified, deleted and read using standard HTTP methods.
    Basic Profile Resources are created by HTTP POST (or PUT) to an existing resource, deleted by HTTP DELETE, updated by HTTP PUT or PATCH, and "fetched" using HTTP GET. Additionally, Basic Profile Resources can be created, updated, and deleted by using SPARQL Update.
  2. Basic Profile Resources use RDF to define their states.
    The state of a Basic Profile Resource (in the sense of state used in the REST architecture) is defined by a set of RDF triples. Binary resources and text resources are not Basic Profile Resources since their states cannot be easily or fully represented in RDF. XML resources might or might not be suitable as Basic Profile Resources. Some XML resources are really data-oriented resources encoded in XML that can be easily represented in RDF. Other XML documents are essentially marked up text documents that are not easily represented in RDF. Basic Profile Resources can be mixed with other resources in the same application.
  3. You can request an RDF/XML representation of any Basic Profile Resource.
    The resource might have other representations as well. These could be other RDF formats, such as Turtle, N3, or NTriples, but non-RDF formats such as HTML and JSON would also be popular additions, and Basic Profile sets no limits.
  4. Basic Profile clients use Optimistic Collision Detection during update.
    Because the update process involves getting a resource first, and then modifying it and later putting it back on the server, there is the possibility of a conflict (for example, another client might have updated the resource since the GET action). To mitigate this problem, Basic Profile implementations should use the HTTP If-Match header and HTTP ETags to detect collisions. (A minimal sketch of such a conditional update appears after this list.)
  5. Basic Profile Resources use standard media types.
    Basic Profile does not require and does not encourage the definition of any new media types. A Basic Profile goal is that any standards-based RDF or Linked Data client be able to read and write Basic Profile data, and defining new media types would prevent that in most cases.
  6. Basic Profile Resources use standard vocabularies.
    Basic Profile Resources use common vocabularies (classes, properties, and so forth) for common concepts. Many websites define their own vocabularies for common concepts such as resource type, label, description, creator, last modification time, priority, enumeration of priority values, and so on. This is usually viewed as a good feature by users who want their data to match their local terminology and processes, but it makes it much harder for organizations to subsequently integrate information in a larger view. Basic Profile requires all resources to expose common concepts using a common vocabulary for properties. Sites can choose to additionally expose the same values under their own private property names in the same resources. In general, Basic Profile avoids inventing property names where possible. Instead, it uses ones from popular RDF-based standards, such as the RDF standards themselves, Dublin Core, and so on. Basic Profile invents property URLs where no match is found in popular standard vocabularies.
  7. Basic Profile Resources set rdf:type explicitly.
    A resource's membership in a class extent can be derived implicitly or indicated explicitly by a triple in the resource representation that uses the rdf:type predicate and the URL of the class. In RDF, there is no requirement to place an rdf:type triple in each resource, but this is a good practice, because it makes a query more useful in cases where inferencing is not supported. Remember also that a single resource can have multiple values for rdf:type. Basic Profile sets no limits to the number of types a resource can have.
  8. Basic Profile Resources use a restricted number of standard data types.
    RDF does not define data types to be used for property values, so Basic Profile lists a set of standard datatypes to be used in Basic Profile.
  9. Basic Profile clients expect to encounter unknown properties and content.
    Basic Profile provides mechanisms for clients to discover lists of expected properties for resources for particular purposes, but it also assumes that any given resource might have many more properties than those listed. Some servers will support only a fixed set of properties for a particular type of resource. Clients should always assume that the set of properties for a resource of a particular type at an arbitrary server might be open, in the sense that different resources of the same type might not all have the same properties, and the set of properties that are used in the state of a resource is not limited to any predefined set. However, when dealing with Basic Profile Resources, clients should assume that a Basic Profile server might discard triples for properties when it has prior knowledge. In other words, servers can restrict themselves to a known set of properties, but clients cannot. When doing an update using HTTP PUT, a Basic Profile client must preserve all property values retrieved by using HTTP GET. This includes all property values that it doesn't change or understand. (Use of HTTP PATCH or SPARQL Update rather than HTTP PUT for updates avoids this burden for clients.)
  10. Basic Profile clients do not assume the type of a resource at the end of a link.
    Many specifications and most traditional applications have a "closed model," by which we mean that any reference from a resource in the specification or application necessarily identifies a resource in the same specification (or a referenced specification) or application. In contrast, the HTML anchor tag can point to any resource addressable by an HTTP URI, not just other HTML resources. Basic Profile works like HTML in this sense. An HTTP URI reference in one Basic Profile Resource can, in general, point to any resource, not just a Basic Profile Resource. There are numerous reasons to maintain an open model like HTML's. One is that it allows data that has not yet been defined to be incorporated in the web in the future. Another reason is that it allows individual applications and sites to evolve over time. If clients assume that they know what will be at the other end of a link, then the data formats of all resources across the transitive closure of all links must be kept stable for version upgrade. A consequence of this independence is that client implementations that traverse HTTP URI links from one resource to another should always code defensively and be prepared for any resource at the end of the link. Defensive coding by client implementers is necessary to allow sets of applications that communicate through Basic Profile to be independently upgraded and flexibly extended.
  11. Basic Profile servers implement simple validations for Create and Update.
    Basic Profile servers should try to make it easy for programmatic clients to create and update resources. If Basic Profile implementations associate a lot of very complex validation rules that need to be satisfied for an update or creation to be accepted, it becomes difficult or impossible for a client to use the protocol without extensive additional information specific to the server that needs to be communicated outside of the Basic Profile specifications. The recommended approach is for servers to allow creation and updates based on the sort of simple validations that can be communicated programmatically through a Shape (see the Constraints section). Additional checks that are required to implement more complex policies and constraints should result in the resource being flagged as requiring more attention, but should not cause the basic Create or Update action to fail.
  12. Basic Profile Resources always use simple RDF predicates to represent links.
    By always representing links as simple predicate values, Basic Profile makes it very simple to know how links will appear in representations and also makes it very simple to query them. When there is a need to express properties on a link, Basic Profile adds an RDF statement with the same subject, object, and predicate as the original link, which is retained, plus any additional "link properties." Basic Profile Resources do not use "inverse links" to support navigation of a relationship in the opposite direction, because this creates a data synchronization problem and complicates a query. Instead, Basic Profile assumes that clients can use queries to navigate relationships in the opposite direction from the direction supported by the underlying link.
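
As an illustration of rule 4, below is a minimal C sketch of optimistic collision detection using libcurl. The resource URL, ETag value, and request body are placeholders, and the Basic Profile mandates only the ETag/If-Match behavior, not any particular client library:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* Hypothetical resource URL, body, and ETag captured from an earlier GET. */
    const char *url  = "http://example.com/resources/42";
    const char *etag = "\"abc123\"";
    const char *body = "...updated RDF/XML representation...";

    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    char ifmatch[128];
    snprintf(ifmatch, sizeof ifmatch, "If-Match: %s", etag);

    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, ifmatch);
    hdrs = curl_slist_append(hdrs, "Content-Type: application/rdf+xml");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT"); /* PUT with a body */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

    long status = 0;
    if (curl_easy_perform(curl) == CURLE_OK)
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);

    /* 412 Precondition Failed means another client changed the resource
       since our GET: re-fetch, re-apply the change, and retry. */
    printf("HTTP status: %ld%s\n", status,
           status == 412 ? " (collision detected)" : "");

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return 0;
}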

Friday, January 6, 2012

RQM - DOORS Integration Architecture

as-is




to-be



-

Analytic Reporting & Live Reporting

Source: https://jazz.net/wiki/bin/view/Main/LinkedLifecycleData

Linked Data is a design for sharing decentralized, but interrelated, data on the Web.


At any given point in the development lifecycle it is of great interest to understand how requirements, source code, test cases, and defects are related. Typical questions are,
  • "Which requirements don't have any test cases?" and
  • "How many defects are unresolved for each requirement?" 

Design 1.

One design for integrating this lifecycle data is to develop relational models for each type of development artifact and then store the corresponding relational representations of all the artifacts in a data warehouse where they can be queried, reported on, and analyzed using conventional business intelligence tools such as BIRT and Cognos.

Analytical Reporting



Design 2.

Linked Data offers an alternative way to solve the data integration problem. The key advance here is that Linked Data provides a uniform way to identify artifacts, namely HTTP Uniform Resource Identifiers (URI), and a common data model and representation format for them, Resource Description Framework (RDF). Data integration among multiple sources of development artifacts is achieved by loading the RDF representations of all the development artifacts into a shared triple store, e.g. Jena, which can be queried using the powerful SPARQL query language.

The main tiers of this architecture are as follows (a sketch of a client query against the SPARQL endpoint follows this list):
  • Data Source Tier - CLM Development Tools that provide Linked Data
  • Reporting and Query Service Tier - Indexer, RDF Triple Store (Jena/TDB), SPARQL Endpoint
  • Presentation Tier - Business Intelligence reporting and analysis (Cognos, BIRT), Document generation (RPE), Faceted browsing 
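
To give a feel for the query service tier, here is a minimal C sketch of a client fetching SPARQL results over HTTP using libcurl; the endpoint URL and the OSLC type URI are illustrative placeholders, not part of any product:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* Hypothetical SPARQL endpoint exposed by the query service tier. */
    const char *endpoint = "http://example.com/sparql";
    const char *query =
        "SELECT ?req WHERE { ?req a <http://open-services.net/ns/rm#Requirement> }";

    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Per the SPARQL protocol, the query travels URL-encoded in the
       "query" parameter of an HTTP GET. */
    char *escaped = curl_easy_escape(curl, query, 0);
    char url[1024];
    snprintf(url, sizeof url, "%s?query=%s", endpoint, escaped);
    curl_free(escaped);

    struct curl_slist *hdrs =
        curl_slist_append(NULL, "Accept: application/sparql-results+xml");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    CURLcode rc = curl_easy_perform(curl); /* result XML prints to stdout */

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}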
Live Reporting

-

Indexing & Query

1. Resources of various shapes are stored in storage.

2. Property indexing extracts information from the stored resources and saves it in a common RDF form.

3. Queries run against that common RDF form and return the results.

Example: a query for the resources (requirements/work items/test cases/...) that I created (based on the common property dc:creator)

Example: a query for the resources (work items/requirements/test cases/...) linked to a change set (based on the links between resources held in different storages)

4. JFS builds two kinds of indexes: a property index and a full-text index.

5. The property-based indexer extracts structured properties from resources so that SQL-like (structured) queries can be used.

6. The text indexer extracts text from resources and feeds it to the Apache Lucene engine to provide full-text ("fuzzy") search.

7. repotools -reindex is run against an offline server. By default it builds the query triple store and the Lucene text store, and only for the latest version of each resource, because building indexes over every version is too time-consuming.

Reference: Jazz Integration Architecture ( https://jazz.net/projects/DevelopmentItem.jsp?href=content/project/plans/jia-overview/index.html )



RDF & SPARQL

Source: http://www.w3.org/TR/rdf-concepts/

The Resource Description Framework (RDF) is a framework for representing information in the Web.

The underlying structure of any expression in RDF is a collection of triples, each consisting of a subject, a predicate and an object. A set of such triples is called an RDF graph.


  • a subject
  • an object, and 
  • a predicate (also called a property) that denotes a relationship
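
In code terms, a triple is just a 3-tuple of names, and a graph is a set of such tuples. A hypothetical C rendering (the data mirrors the RDF example below):

#include <stdio.h>

/* A minimal in-memory picture of RDF: each row is one triple. */
typedef struct {
    const char *subject;   /* a URI identifying the resource */
    const char *predicate; /* a URI naming the relationship (the property) */
    const char *object;    /* another URI, or a literal value */
} Triple;

int main(void)
{
    const Triple graph[] = {
        { "http://reqs.com/req/1234", "rdf:type",            "oslc_rm:Requirement" },
        { "http://reqs.com/req/1234", "dcterms:title",       "\"Smooth upgrade path\"" },
        { "http://reqs.com/req/1234", "oslc_rm:validatedBy", "http://tests.com/test/521" },
    };
    for (int i = 0; i < 3; i++) /* an RDF graph is just the set of rows */
        printf("<%s> %s %s\n", graph[i].subject, graph[i].predicate, graph[i].object);
    return 0;
}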


RDF example

<http://reqs.com/req/1234>   rdf:type                oslc_rm:Requirement
<http://reqs.com/req/1234>   dcterms:title           "Smooth upgrade path"
<http://reqs.com/req/1234>   oslc_rm:elaboratedBy    <http://reqs.com/req/7772>
<http://reqs.com/req/1234>   oslc_rm:validatedBy     <http://tests.com/test/521>


<http://tests.com/test/521>   rdf:type                oslc_qm:TestCase
<http://tests.com/test/521>   dcterms:title           "Verify compatibility"
<http://tests.com/test/521>   oslc_qm:usesTestScript  <http://tests.com/script/13>


SPARQL is the standard query language for RDF datasets.

SPARQL example

SELECT ?uri ?title WHERE {
   ?uri rdf:type              oslc_rm:Requirement .
   ?uri dcterms:title         ?title .
}

Result


uri                          title
<http://reqs.com/req/1234>   "Smooth upgrade path"
-

Thursday, January 5, 2012

TESTRT - Using Eclipse CDT on Windows

Source: http://www.ibm.com/developerworks/opensource/library/os-eclipse-stlcdt/


Get products and technologies
  • Learn about MinGW, the GNU C/C++ tools for Windows. 
  • Download Cygwin, a Linux-like environment for Windows. It consists of two parts: a DLL that acts as a Linux API emulation layer, providing substantial Linux API functionality, and a collection of tools that provide a Linux look and feel. 
  • Once you're done installing, you'll need to add gcc, g++, make, and GDB to your PATH (for example: ;C:\MinGW\msys\1.0\bin;C:\MinGW\bin). 
  • The Eclipse C/C++ Development Toolkit (CDT) download information contains the latest information about the available versions of CDT. (Installation is not required when using the TestRT Eclipse client.)

(1) New C project




(2) New > Source Folder.


(3) New > Source File
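
(The article does not reproduce the file contents; any small compilable program will do. A minimal hypothetical example, hello.c:)

#include <stdio.h>

/* hello.c: a trivial program, enough to exercise the build,
   run, and later TestRT instrumentation steps. */
int main(void)
{
    int i;
    for (i = 0; i < 3; i++)
        printf("iteration %d\n", i);
    return 0;
}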



(4) Project > Build Project


(5) Run > Run Configurations
- Select the binary to run, then click Apply.

- Click the Run button to execute it.
- Check the execution results in the console.


(6) Convert the C project to a TestRT project

- Select the TDP (Target Deployment Port).


(7) Build the project. (The Active Configuration has now changed from Debug to TestRT.)


(8) Right-click the executable and choose Run As > Run instrumented application.

(9) Click the execution result to view the coverage report.


Option: to view other reports, modify the TestRT build configuration.
- Select the desired reports in the project properties.


- Run build clean, build project, and run instrumented application again.

- Launch studio from a command window to view the results.


- Open the result reports.


-

Wednesday, January 4, 2012

OSLC code for retrieving the list of RTC Project Areas

Source: https://jazz.net/forums/viewtopic.php?t=12745


// Apex (Salesforce) snippet from the forum post. In the original, settings
// (custom settings), res (HttpResponse), result (List<ExportValue>), and the
// ExportValue class belong to the enclosing class; the two declarations added
// below are assumptions made so the fragment reads on its own.
Http http = new Http();
HttpRequest req = new HttpRequest();
HttpResponse res;
List<ExportValue> result = new List<ExportValue>();

// get the root document link to the catalog
req.setEndpoint(settings[0].ServerAddress__c + settings[0].BaseDocument__c);
req.setMethod('GET');
res = http.send(req);
if (res.getStatusCode() == 200)
{
    // get the providers element
    Dom.XMLNode providers = res.getBodyDocument().getRootElement()
        .getChildElement('cmServiceProviders', 'http://open-services.net/xmlns/cm/1.0/');
    // and the catalog URL
    String attr = providers.getAttributeValue('resource', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#');
    req.setEndpoint(attr);
    req.setMethod('GET');
    // set the userid/pw to operate under
    Blob headerValue = Blob.valueOf(settings[0].UserName__c + ':' + settings[0].UserPW__c);
    String authorizationHeader = 'BASIC ' + EncodingUtil.base64Encode(headerValue);
    req.setHeader('Authorization', authorizationHeader);
    // send the request
    res = http.send(req);

    // should come back with an authorization request redirect (302)
    if (res.getStatusCode() == 302)
    {
        // get the sessionid string from the returned cookies
        String[] cookies = res.getHeader('Set-Cookie').split(';');
        String sessionid = '';
        for (Integer i = 0; i < cookies.size(); i++)
        {
            if (cookies[i].startsWith('JSESSIONID'))
            {
                sessionid = cookies[i];
                break;
            }
        }

        // set the redirect endpoint
        req.setEndpoint(res.getHeader('Location'));
        // and the session id
        req.setHeader('Cookie', sessionid);
        req.setMethod('GET');
        res = http.send(req);

        if (res.getStatusCode() == 200)
        {
            // we will have to deal with form logon later
            String pw = settings[0].SecurityString__c;
            pw = pw.replaceFirst('uname', settings[0].Username__c);
            pw = pw.replaceFirst('upw', settings[0].Userpw__c);
            req.setEndpoint(settings[0].ServerAddress__c + pw);
            //req.setHeader('Referer', res.getHeader('Location'));
            req.setMethod('GET');
            req.setHeader('ContentType', 'application/x-www-form-urlencoded');
            req.setHeader('Cookie', sessionid);
            res = http.send(req);
        }
        // spin thru the remaining redirects
        while (res.getStatusCode() == 302)
        {
            // redirect after login
            req.setEndpoint(res.getHeader('Location'));
            req.setHeader('Cookie', sessionid);
            req.setMethod('GET');
            res = http.send(req);
        }
        // we should have the project list document now
        System.debug(res.getHeaderKeys());
        System.debug(res.getBody());
        // find the first project 'entry' node (assumes at least one project exists)
        Dom.XMLNode project = res.getBodyDocument().getRootElement()
            .getChildElement('entry', 'http://open-services.net/xmlns/discovery/1.0/');
        do
        {
            Dom.XMLNode provider = project.getChildElement('ServiceProvider', 'http://open-services.net/xmlns/discovery/1.0/');
            String projectName = provider.getChildElement('title', 'http://purl.org/dc/terms/').getText();
            String projectServicesURL = provider.getChildElement('services', 'http://open-services.net/xmlns/discovery/1.0/')
                .getAttributeValue('resource', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#');
            req.setEndpoint(projectServicesURL);
            req.setHeader('Cookie', sessionid);
            req.setMethod('GET');
            res = http.send(req);
            // get the services list document
            if (res.getStatusCode() == 200)
            {
                Dom.XMLNode serviceslist = res.getBodyDocument().getRootElement();
                //System.debug(serviceslist);
                Dom.XMLNode changeRequests = serviceslist.getChildElement('changeRequests', 'http://open-services.net/xmlns/cm/1.0/');
                //System.debug(changeRequests);
                Dom.XMLNode workitemFactory = changeRequests.getChildElement('factory', 'http://open-services.net/xmlns/cm/1.0/');
                String workitemCreateUrl;
                if (workitemFactory.getAttributeValue('default', 'http://open-services.net/xmlns/cm/1.0/') == 'true')
                {
                    workitemCreateUrl = workitemFactory.getChildElement('url', 'http://open-services.net/xmlns/cm/1.0/').getText();
                    System.debug(workitemCreateUrl);
                }
                String workitemQueryURL = changeRequests.getChildElement('simpleQuery', 'http://open-services.net/xmlns/cm/1.0/')
                    .getChildElement('url', 'http://open-services.net/xmlns/cm/1.0/').getText();
                //System.debug(workitemQueryURL);
                ExportValue e = new ExportValue(projectName, workitemCreateUrl, workitemQueryURL);
                result.add(e);
            }
            // find the project's parent node
            Dom.XMLNode parent = project.getParent();
            // remove the project node from the document
            parent.removeChild(project);
            // find the next project entry
            project = parent.getChildElement('entry', 'http://open-services.net/xmlns/discovery/1.0/');
        } while (project != null);
    }
}

Monday, January 2, 2012

CCCQ8 CM Architecture Change


  • The CCRC WAN Server architecture is simpler in version 8

    • Version 8 greatly improves the performance and scalability of the CCRC WAN Server.
    • UCM/CQ integration now uses OSLC (HTTP connections) instead of CQIntSrv.
    • Base CC/CQ integration still uses the existing mechanism.
  • CQ CM Server (same as in 7.x)
    • -
    • -