Monday, November 23, 2015


XHTML is a separate language that began as a reformulation of HTML 4.01 using XML 1.0. It continues to be developed:

XHTML 1.0, published January 26, 2000 as a W3C Recommendation, later revised and republished August 1, 2002. It offers the same three flavors as HTML 4.0 and 4.01, reformulated in XML, with minor restrictions.

XHTML 1.1, published May 31, 2001 as a W3C Recommendation. It is based on XHTML 1.0 Strict, but includes minor changes, can be customized, and is reformulated using modules from Modularization of XHTML, which was published April 10, 2001 as a W3C Recommendation.

XHTML 2.0 is still a W3C Working Draft. It is incompatible with XHTML 1.x and would therefore be more accurately characterized as an XHTML-inspired new language than as an update to XHTML 1.x.

XHTML5, which is an update to XHTML 1.x, is being defined alongside HTML5 in the HTML5 draft.

HTML Element


Consider the following example:



This is an HTML element:

<b>This text is bold</b>

The HTML element starts with a start tag: <b>
The content of the HTML element is: This text is bold
The HTML element ends with an end tag: </b>

The purpose of the <b> tag is to define an HTML element that should be displayed as bold.

This is also an HTML element:

<body>
This text is the body of the document
</body>

This HTML element starts with the start tag <body>, and ends with the end tag </body>.

The purpose of the <body> tag is to define the HTML element that contains the body of the HTML document.


November 1995
HTML 2.0 was published as IETF RFC 1866. Supplemental RFCs added capabilities:

November 1995: RFC 1867 (form-based file upload)

May 1996: RFC 1942 (tables)

August 1996: RFC 1980 (client-side image maps)

January 1997: RFC 2070 (internationalization)
In June 2000, all of these were declared obsolete/historic by RFC 2854.

January 1997
HTML 3.2 was published as a W3C Recommendation. It was the first version developed and standardized exclusively by the W3C, as the IETF had closed its HTML Working Group in September 1996.

HTML 3.2 dropped math formulas entirely, reconciled overlap among various proprietary extensions, and adopted most of Netscape's visual markup tags. Netscape's blink element and Microsoft's marquee element were omitted due to a mutual agreement between the two companies. Markup for mathematical formulas similar to that in HTML 3.0 was not standardized until 14 months later, in MathML.

December 1997
HTML 4.0 was published as a W3C Recommendation. It offers three "flavors":

Strict, in which deprecated elements are forbidden,

Transitional, in which deprecated elements are allowed,

Frameset, in which mostly only frame-related elements are allowed.
Initially code-named "Cougar", HTML 4.0 adopted many browser-specific element types and attributes, but at the same time sought to phase out Netscape's visual markup features by marking them as deprecated in favor of style sheets.

April 1998
HTML 4.0 was reissued with minor edits without incrementing the version number.

December 1999
HTML 4.01 was published as a W3C Recommendation. It offers the same three flavors as HTML 4.0, and its last errata were published May 12, 2001.

May 2000
ISO/IEC 15445:2000 ("ISO HTML", based on HTML 4.01 Strict) was published as an ISO/IEC international standard.

As of mid-2008, HTML 4.01 and ISO/IEC 15445:2000 are the most recent versions of HTML. Development of the parallel, XML-based language XHTML occupied the W3C's HTML Working Group through the early and mid-2000s.

Drafts

October 1991
HTML Tags, an informal CERN document listing twelve HTML tags, was first mentioned in public.

November 1992
HTML DTD 1.1 (the first draft of the HTML DTD to carry a version number), an informal draft.

July 1993
Hypertext Markup Language was published by the IETF as an Internet-Draft (a rough proposal for a standard). It expired in January 1994.

November 1993
HTML+ was published by the IETF as an Internet-Draft and was a competing proposal to the Hypertext Markup Language draft. It expired in May 1994.

April 1995 (authored March 1995)
HTML 3.0 was proposed as a standard to the IETF, but the proposal expired five months later without further action. It included many of the capabilities that were in Raggett's HTML+ proposal, such as support for tables, text flow around figures, and the display of complex mathematical formulas.

A demonstration appeared in W3C's own Arena browser. HTML 3.0 did not succeed for several reasons. The pace of browser development, as well as the number of interested parties, had outstripped the resources of the IETF. Netscape continued to introduce HTML elements that specified the visual appearance of documents, contrary to the goals of the newly-formed W3C, which sought to limit HTML to describing logical structure. Microsoft, a newcomer at the time, played to all sides by creating its own tags, implementing Netscape's elements for compatibility, and supporting W3C features such as Cascading Style Sheets.

January 2008
HTML5 was published as a Working Draft by the W3C.

Although its syntax closely resembles that of SGML, HTML5 has abandoned any attempt to be an SGML application and explicitly defines its own "html" serialization, in addition to an alternative XML-based XHTML5 serialization.

HTML, an initialism for HyperText Markup Language, is the predominant markup language for web pages. It provides a means to describe the structure of text-based information in a document (by denoting certain text as links, headings, paragraphs, lists, and so on) and to supplement that text with interactive forms, embedded images, and other objects. HTML is written in the form of "elements" consisting minimally of "tags" surrounded by angle brackets. HTML can also describe, to some degree, the appearance and semantics of a document, and can include embedded scripting language code (such as JavaScript) that can affect the behavior of Web browsers and other HTML processors.

History of HTML

Origins
In 1980, physicist Tim Berners-Lee, who was an independent contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. In 1989, Berners-Lee and CERN data systems engineer Robert Cailliau each submitted separate proposals for an Internet-based hypertext system providing similar functionality. The following year, they collaborated on a joint proposal, the WorldWideWeb (W3) project, which was accepted by CERN. In his personal notes from 1990 he lists, "some of the many areas in which hypertext is used", and puts an encyclopaedia first.

First specifications
The first publicly available description of HTML was a document called HTML Tags, first mentioned on the Internet by Berners-Lee in late 1991. It describes 22 elements comprising the initial, relatively simple design of HTML. Thirteen of these elements still exist in HTML 4. HTML is a text and image formatting language used by web browsers to dynamically format web pages. The semantics of many of its tags can be traced to early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system, and its formatting commands were derived from the commands used by typesetters to manually format documents.

Berners-Lee considered HTML to be, at the time, an application of SGML, but it was not formally defined as such until the mid-1993 publication, by the IETF, of the first proposal for an HTML specification: Berners-Lee and Dan Connolly's "Hypertext Markup Language (HTML)" Internet-Draft, which included an SGML Document Type Definition to define the grammar. The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Similarly, Dave Raggett's competing Internet-Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms.

After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard on which future implementations should be based. Published as Request for Comments 1866, HTML 2.0 included ideas from the HTML and HTML+ drafts. There was no "HTML 1.0"; the 2.0 designation was intended to distinguish the new edition from previous drafts.

Further development under the auspices of the IETF was stalled by competing interests. Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C). However, in 2000, HTML also became an international standard (ISO/IEC 15445:2000). The last HTML specification published by the W3C is the HTML 4.01 Recommendation, published in late 1999. Its issues and errors were last acknowledged by errata published in 2001.

Sunday, November 22, 2015




Communication methods in computing include local procedure calls and remote procedure calls.

Remote Procedure Call

This is a protocol that one program can use to request a service from a program located on another machine in a network, without having to understand the details of the network. When a program that uses RPC is compiled into an executable, a stub is included that acts as the representative of the remote procedure code. When the program is run and the procedure call is issued, the stub receives the request and forwards it to the client runtime program on the local computer. The client runtime program knows the address of the remote computer and the server application, and sends the request across the network. The server likewise has a runtime program and a stub that interfaces with the remote procedure itself. The result is returned along the same path.
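As a concrete illustration, here is a minimal sketch using Python's standard xmlrpc modules, where the ServerProxy object plays the role of the client stub; the port number and the add function are illustrative assumptions, not part of any particular RPC product.

# rpc_server.py -- a minimal RPC sketch (Python standard library).
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    # The "remote procedure" that clients will call over the network.
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000))  # illustrative port
server.register_function(add, "add")
server.serve_forever()                            # wait for incoming requests

# rpc_client.py -- the proxy acts as the stub: it forwards the call
# across the network, waits for the result, and returns it.
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # looks like a local call; executes on the server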

Local Procedure Call

A local procedure call (LPC) is an interprocess communication facility for high-speed message passing.

In Windows NT, client-subsystem communication happens in a fashion similar to that in the Mach operating system. Each subsystem provides a client-side DLL that links with the client executable. The DLL contains stub functions for the subsystem's API. Whenever a client process (an application using the subsystem interface) makes an API call, the corresponding stub function in the DLL passes the call on to the subsystem process. The subsystem process, after the necessary processing, returns the results to the client DLL. The stub function in the DLL waits for the subsystem to return the results and, in turn, passes them to the caller. To the client process, this simply resembles calling a normal procedure in its own code. In the case of RPC, the client actually calls a procedure sitting on some remote server over the network, hence the name remote procedure call. In Windows NT, the server runs on the same machine, hence the mechanism is called a local procedure call.
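The pattern is easy to mimic in miniature. The sketch below, assuming Python's multiprocessing in place of the NT subsystem machinery (all names are illustrative, not Windows NT APIs), shows a stub function that forwards a request to a separate process, waits for the result, and returns it, so the caller sees an ordinary procedure call:

# A toy version of the stub pattern; "subsystem" and "to_upper" are
# illustrative names only.
from multiprocessing import Process, Pipe

def subsystem(conn):
    # The "subsystem process": services requests sent over the connection.
    while True:
        request = conn.recv()
        if request is None:               # shutdown sentinel
            break
        op, arg = request
        if op == "to_upper":
            conn.send(arg.upper())

def to_upper_stub(conn, text):
    # The "stub": forwards the call and blocks until the result arrives.
    conn.send(("to_upper", text))
    return conn.recv()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    worker = Process(target=subsystem, args=(child_conn,))
    worker.start()
    print(to_upper_stub(parent_conn, "hello"))  # prints HELLO
    parent_conn.send(None)                      # tell the subsystem to exit
    worker.join()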

LPC is designed to allow three methods of exchanging messages:

A message shorter than 256 bytes can be sent by calling LPC with a buffer containing the message; this is a small message. The message is then copied from the address space of the sending process into the system address space, and from there into the address space of the receiving process.

If a client and a server want to exchange more than 256 bytes of data, they can choose to use a shared section to which both are mapped. The sender places message data in the shared section and then sends a small message to the receiver with pointers to where the data is to be found in the shared section (see the sketch after this list).

When a server wants to read or write a larger amount of data than will fit in a shared section, the data can be read directly from, or written directly to, a client's address space. The LPC component supplies two functions that a server can use to accomplish this. A message sent by the first method is used to synchronize the message passing.
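As an illustrative analogue of the second method, the sketch below uses Python's multiprocessing.shared_memory (Python 3.8+; the names are assumptions) to place bulk data in a shared section and send only a small message describing where to find it:

# An analogue of the shared-section method: bulk data goes into a shared
# section; only a small descriptive message is sent.
from multiprocessing import Process, Queue, shared_memory

def receiver(queue):
    # Receives only the small message: the section's name and data length.
    name, length = queue.get()
    section = shared_memory.SharedMemory(name=name)
    print(bytes(section.buf[:length]).decode())
    section.close()

if __name__ == "__main__":
    data = b"a payload too large for a small message"
    section = shared_memory.SharedMemory(create=True, size=len(data))
    section.buf[:len(data)] = data        # place the bulk data in the section
    queue = Queue()
    worker = Process(target=receiver, args=(queue,))
    worker.start()
    queue.put((section.name, len(data)))  # the small message points at the data
    worker.join()
    section.close()
    section.unlink()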
There are three types of LPC. The first type sends small messages of up to 304 bytes. The second type sends larger messages. The third type is called Quick LPC and is used by the Win32 subsystem in Windows NT 3.51. The first two types of LPC use port objects for communication.

Ports resemble sockets or named pipes in Unix. A port is a bi-directional communication channel between two processes. However, unlike sockets, the data passed through ports is not streamed; ports preserve message boundaries. Simply put, you can send and receive messages using ports. The subsystems create ports with well-known names. Client processes that need to invoke services from the subsystems open the corresponding port using its well-known name. After opening the port, the client can communicate with the server over the port.
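The sketch below mimics a port with a well-known name using Python's multiprocessing.connection; the address, authkey, and messages are illustrative assumptions, but, like ports, the channel carries whole messages rather than a byte stream:

# A port-like channel with a well-known name.
from multiprocessing.connection import Listener, Client
import threading

ADDRESS = ("localhost", 6000)            # the port's "well-known name"
listener = Listener(ADDRESS, authkey=b"demo")

def subsystem():
    # Accept one client and answer one message (enough for a sketch).
    with listener.accept() as conn:
        request = conn.recv()            # whole messages, never a byte stream
        conn.send("reply to: " + request)

threading.Thread(target=subsystem, daemon=True).start()

with Client(ADDRESS, authkey=b"demo") as conn:
    conn.send("hello subsystem")
    print(conn.recv())                   # -> reply to: hello subsystem

listener.close()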

Client/server describes the relationship between two computer programs in which one program, the client, makes a service request from another program, the server, which fulfils the request. Although programs within a single computer can use the client/server idea, it is a more important idea in a network. In a network, the client/server model provides a convenient way to interconnect programs that are distributed efficiently across different locations. Computer transactions using the client/server model are very common.

For example, to check your bank account from your computer, a client program in your computer forwards your request to a server program at the bank. That program might in turn forward the request to its own client program, which sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank data client, which in turn serves it back to the client in your personal computer, which displays the information for you.




In a two-tier architecture the workload is divided between the server (which hosts the database) and the client (which hosts the user interface). In reality these are normally located on separate physical machines, but there is no absolute requirement for this to be the case.



The distribution of application logic and processing in this model was, and is, problematic. If the client is 'smart' and hosts the main application processing, then there are issues associated with distributing, installing, and maintaining the application, because each client needs its own local copy of the software. If the client is 'dumb', the application logic and processing must be implemented in the database, and it then becomes totally dependent on the specific DBMS being used. In either scenario, each client must also have a login to the database and the necessary rights to carry out whatever functions are required by the application. The two-tier client/server architecture proved to be a good solution when the user population is relatively small (up to about 100 concurrent users), but it rapidly proved to have a number of limitations.

Performance: As the user population grows, performance begins to deteriorate. This is the direct result of each user having their own connection to the server, which means that the server has to keep all these connections live (using "keep-alive" messages) even when no work is being done.

Security: Each user must have their own individual access to the database, and be granted whatever rights may be required in order to run the application. Apart from the security issues this raises, maintaining users rapidly becomes a major task in its own right. This is especially problematic when new features or functionality have to be added to the application and user rights need to be updated.

Capability: No matter what type of client is used, much of the data processing has to be located in the database, which means that it is totally dependent upon the capabilities, and implementation, provided by the database manufacturer. This can seriously limit application functionality, because different databases support different functionality, use different programming languages, and even implement such basic tools as triggers differently.

Portability: Since the two-tier architecture is so dependent upon the specific database implementation, porting an existing application to a different DBMS becomes a major issue. This is especially apparent in the case of vertical market applications where the choice of DBMS is not determined by the vendor. Having said that, this architecture found a new lease of life in the Internet age. It can work well in a disconnected environment where the UI is essentially dumb (i.e. a browser). However, in many ways this implementation harks back to the original mainframe architecture, with the browser playing the part of the dumb terminal.

Saturday, November 21, 2015


To move from one page of a document to another page, or to another document on the same or another Web site, the user clicks a hyperlink (usually just called a link) in the document shown in their Web client. Documents and locations within documents are identified by an address, defined as a Uniform Resource Locator, or URL. The following URL illustrates the general form:



www.sybase.com/products
                                      or
www.sybase.com/inc/corpinfo/mkcreate.html


URLs contain information about which server the document is on, and may also specify a particular document available to that server, and even a position within the document. In addition, a URL may carry other information from a Web client to a Web server, including the values entered into fields in an HTML form.

For more information about URLs and addresses on the Web, see the material on the World Wide Web Consortium pages, at the following address:

http://www.w3.org/pub/WWW/Addressing/

When a user clicks a link on a document on their Web client, the URL is sent to the server of the indicated Web site. The Web server locates the document, and sends the HTML to the Web client across the network.

As the figure below illustrates, information is stored at Web sites. Access to the information is managed by a Web server for the site. Users access the information using Web clients, which are also called browsers.





Information on the Web is stored in documents, using a language called HTML (HyperText Markup Language). Web clients must interpret HTML to be able to display the documents to a user. The protocol that governs the exchange of information between the Web server and Web client is named HTTP (HyperText Transfer Protocol).

External data in HTML documents
HTML documents can include graphics or other types of data by referencing an external file (for example, a GIF or JPEG file for a graphic). Not all these external formats are supported by all Web clients. When the document contains such data, the Web client can send a request to the Web server to provide the relevant graphic. If the Web client does not support the format, it does not request the information from the server.



Port Numbers
To identify a host machine, an IP address or a domain name is needed. To identify a particular server on a host, a port number is used. A port is like a logical connection to a machine; it has no correspondence with the physical connections, of which there might be just one. Port numbers can take values from 1 to 65,535. Each type of service has, by convention, a standard port number: thus 80 usually means Web serving and 21 means file transfer. If the default port number is used, it can be omitted in the URL (see below). For each port supplying a service there is a server program waiting for any requests. Thus a web server program listens on port 80 for any incoming requests. All these server programs run together in parallel on the host machine.

When a packet of information is received by a host, the port number is examined and the packet sent to the program responsible for that port. Thus the different types of request are distinguished and dispatched to the relevant program.

The following table lists the common services, together with their normal port numbers. These conventional port numbers are sometimes not used for a variety of reasons. One example is when a host provides (say) multiple web servers, so only one can be on port 80. Another reason is where the server program has not been assigned the privilege to use port 80.
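Service                      Port
FTP (file transfer)          21
Telnet (remote login)        23
SMTP (sending mail)          25
HTTP (Web serving)           80
POP3 (retrieving mail)       110
HTTPS (secure Web)           443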

Sockets
A socket is the software mechanism for one program to connect to another. A pair of programs opens a socket connection between themselves. This then acts like a telephone connection - they can converse in both directions for as long as the connection is open. (In fact, data can flow in both directions at the same time.) More than one socket can use any particular port. The network software ensures that data is routed to or from the correct socket.

When a server (on a particular port number) gets an initial request, it often spawns a separate thread to deal with the client. This is because different clients may well run at different speeds. Having one thread per client means that the different speeds can be accommodated. The new thread creates a (software) socket to use as the connection to the client. Thus one port may be associated with many sockets.
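Here is a minimal sketch of this thread-per-client design in Python; the port number and the echo behaviour are illustrative assumptions:

# A thread-per-client echo server.
import socket
import threading

def handle_client(conn):
    # Each client runs in its own thread, so fast and slow clients coexist.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:                 # the client closed its end
                break
            conn.sendall(data)           # echo the data back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 9090))
server.listen()
while True:
    conn, addr = server.accept()         # a new socket for each client
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()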

Streams
Accessing information across the Internet is accomplished using streams. A stream is a serial collection of data, such as can be sent to a printer, a display or a serial file. Similarly a stream is like data input from a keyboard or input from a serial file. Thus reading or writing to another program across a network is just like reading or writing to a serial file.
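For example, in Python, reading a document from a web server uses the same stream operations as reading a local file; the URL below is illustrative:

# Network input handled with the same stream operations as file input.
from urllib.request import urlopen

with urlopen("http://example.com/") as stream:
    for line in stream:                  # iterate line by line, like a file
        print(line.decode("utf-8", errors="replace"), end="")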

URL
A URL (Uniform Resource Locator) is:

a unique identifier for any resource on the Internet
typed into a Web browser
used as a hyperlink within an HTML document
quoted as a reference to a source

A URL has the structure:

protocol://hostname[:port]/[pathname]/filename#section

The host name is the name of the server that provides the service. This can either be a domain name or an IP address.

The port number is only needed when the server does not use the default port number. For example, 80 is the default port number for HTTP.
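A quick way to see these parts is to split a URL with Python's urllib.parse; the URL below is made up for illustration:

# Splitting a URL into the parts described above.
from urllib.parse import urlparse

parts = urlparse("http://www.example.com:8080/docs/intro.html#history")
print(parts.scheme)      # protocol: 'http'
print(parts.hostname)    # host name: 'www.example.com'
print(parts.port)        # 8080 (None when the default port is implied)
print(parts.path)        # '/docs/intro.html'
print(parts.fragment)    # section: 'history'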
