Saturday, 31 December 2016

TELEPORT



                   A telecommunications port—or, more commonly, teleport—is a satellite ground station with multiple parabolic antennas (i.e., an antenna farm) that functions as a hub connecting a satellite or geocentric orbital network with a terrestrial telecommunications network. Teleports may provide various broadcasting services among other telecommunications functions, such as uploading computer programs or issuing commands over an uplink to a satellite.




Modulation & Modulator

                     
         In television systems, signals must be carried within a limited frequency spectrum, with fixed lower and upper frequency limits. A modulator is a device in charge of carrying one signal inside another signal for transmission: it is able to shift a low-frequency signal to a different part of the spectrum. As a result, the transmitted frequency can be controlled and the constraint above can be met. Fundamentally, the aim of modulating a signal consists in changing a parameter of a carrier wave (its amplitude, frequency, or phase) according to the variations of the modulating signal (the information to be transmitted).
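As a minimal sketch of this idea (all parameter values are illustrative, not taken from the text), the NumPy snippet below varies the amplitude of a carrier according to a low-frequency message signal and, for comparison, varies its frequency instead:

```python
import numpy as np

fs = 1_000_000                      # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal

fc = 100_000                        # carrier frequency, Hz (illustrative)
fm = 1_000                          # message frequency, Hz (illustrative)
message = np.cos(2 * np.pi * fm * t)

# Amplitude modulation: the carrier's amplitude parameter tracks the message.
m = 0.5                             # modulation index
am = (1 + m * message) * np.cos(2 * np.pi * fc * t)

# Frequency modulation: the carrier's instantaneous frequency tracks the message.
kf = 5_000                          # peak frequency deviation, Hz
fm_wave = np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * np.cumsum(message) / fs)
```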
                         
                        There are four modulation modes available, with QPSK and 8PSK intended for broadcast applications in non-linear satellite transponders driven close to saturation. 16APSK and 32APSK, requiring a higher level of C/N, are mainly targeted at professional applications such as news gathering and interactive services.
                        Adaptive Coding and Modulation (ACM) allows the transmission parameters to be changed on a frame-by-frame basis depending on the particular conditions of the delivery path for each individual user. It is mainly targeted at unicast interactive services and point-to-point professional applications.
                         
                        DVB-S2 offers optional backwards-compatible modes that use hierarchical modulation to allow legacy DVB-S receivers to continue to operate, whilst providing additional capacity and services to newer receivers.

Digital modulation is often a means of transmitting payload data from orbiting satellites to ground stations.  In one such approach, quadrature phase-shift-keying (QPSK) modulation provides both spectral and power efficiency. In a QPSK modulator, two data streams simultaneously modulate a carrier signal. For optimum use of available satellite power, an unbalanced QPSK (UQPSK) modulator is often used. The design and simulation for such a modulator will be shown here, along with methods for demodulating such signals.
Communications channels in satellite-communications (satcom) systems, as well as communications systems in general, can be categorized as being power- or bandwidth-limited. For power-limited communications channels, coding schemes are typically applied to save power at the expense of bandwidth. In bandwidth-limited channels, spectrally efficient modulation schemes are often used to conserve bandwidth. The prime goal of spectrally efficient modulation is to optimize bandwidth efficiency, which is defined as the ratio of the data rate to the channel bandwidth (in b/s/Hz). A secondary goal of such a modulation scheme is to achieve high bandwidth efficiency with minimum signal power.
                        QPSK modulation results in optimum use of both spectrum and power when the data rates for the two channels—or the in-phase (I) and quadrature (Q) signal components—are the same. But when two data streams at different bit rates from two independent payloads must be transmitted, the normal procedure of formatting the data for modulation onto the carrier becomes very complicated. The easiest way is to transmit the two different data streams on the two QPSK channels by direct modulation, which results in the two channels of the modulator having different data rates. The higher of the data rates determines the bandwidth required for the modulated carrier and the transmitter output power will be equally shared by the two different data streams.
                        To optimize available transmit power from a satcom system, an UQPSK modulator is proposed here, where the power level of the low-data-rate channel can be reduced to achieve the same energy-per-bit to noise-density (Eb/No) ratio for both channels. This can be achieved by unbalancing the amplitudes of the carrier’s signal components in the I and Q channels of a conventional QPSK modulator. The amplifier power following the modulator is shared between the I and Q channels in proportion to the amplitudes of the I and Q signal components of the modulated carrier.
                        Figure 1 shows a block diagram of the proposed UQPSK modulator. In a conventional QPSK modulator, the carrier source signal is divided equally by a 3-dB/90-deg. branch line hybrid coupler to yield two equal-amplitude carriers in quadrature. Both carriers are modulated with I and Q data streams using binary-phase-shift-keying (BPSK) modulation. The BPSK-modulated carriers are combined in an in-phase Wilkinson power combiner to produce a QPSK-modulated carrier. When data of different bit rates are to be modulated, an UQPSK modulator can be used.

                       

                        1. This block diagram represents the proposed UQPSK modulator.
                        In the case of an UQPSK modulator, for optimum utilization of power, an attenuator is added to the low-data-rate channel to reduce the power. A phase shifter is added to the high-data-rate channel to compensate for the phase shift introduced by the attenuator in the low-data-rate channel. The resulting output is a UQPSK-modulated signal. The main-lobe bandwidth will be determined by the high-data-rate channel. Figure 2 shows a plot of the I and Q data, along with the simulated UQPSK spectrum.
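The following NumPy sketch illustrates the principle at complex baseband under simple assumptions (the bit rates, rectangular pulses and attenuation value are illustrative): the I and Q streams run at different rates, and the low-rate Q channel is attenuated so that both channels have the same Eb/No. With a 4:1 bit-rate ratio, equal Eb requires a 4:1 power ratio, i.e. about 6 dB of attenuation.

```python
import numpy as np

rng = np.random.default_rng(0)

sps_i, sps_q = 2, 8            # samples per symbol: high-rate I, low-rate Q (4:1)
n_sym_i, n_sym_q = 400, 100

# Random antipodal data on each channel.
bits_i = rng.integers(0, 2, n_sym_i) * 2 - 1
bits_q = rng.integers(0, 2, n_sym_q) * 2 - 1

# Rectangular pulse shaping (hold each symbol for its duration).
i_wave = np.repeat(bits_i, sps_i).astype(float)
q_wave = np.repeat(bits_q, sps_q).astype(float)

# Unbalance: attenuate the low-rate channel so Eb is equal on both channels.
atten_db = 6.0
q_wave *= 10 ** (-atten_db / 20)

# Complex-baseband UQPSK signal: I on the real axis, Q in quadrature.
uqpsk = i_wave + 1j * q_wave

# Transmit power splits between channels in proportion to squared amplitudes.
p_i, p_q = np.mean(i_wave ** 2), np.mean(q_wave ** 2)
print(f"I-channel power fraction: {p_i / (p_i + p_q):.2f}")
print(f"Q-channel power fraction: {p_q / (p_i + p_q):.2f}")
```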
                       

DIGITAL VIDEO BROADCASTING (DVB)



                   Once the video is encoded in the desired format (MPEG-2 or MPEG-4/H.264 AVC), it has to be put on the network to be distributed and transported to the end user. In this field there are different connections: satellite, terrestrial, cable, etc. Depending on the type of medium used for transport there will be different formats, but terrestrial digital television mostly uses a terrestrial connection (DVB-T) mixed with a connection via satellite (DVB-S). These connections are in charge of transmitting the digital signal to the end user, but first this signal must be created, since at this point there are only video, audio and data signals. Digital Video Broadcasting (DVB) has become the synonym for digital television and for data broadcasting worldwide. DVB services have been introduced in Europe, in North and South America, in Asia, Africa and Australia. DVB is the technology that makes possible the broadcasting of "data containers" in which all kinds of digital data, up to a data rate of 38 Mbit/s, can be transmitted at bit-error rates on the order of 10⁻¹¹.

BASEBAND PROCESSING

                   The transmission techniques developed by DVB are transparent with respect to the kind of data to be delivered to the customer. They are capable of making available bit streams at (typically) 38 Mbit/s within one satellite or cable channel, or 24 Mbit/s within one terrestrial channel. On the other hand, a digital video signal created in today’s TV studios has a data rate of about 166 Mbit/s and thus cannot possibly be carried via the existing media. Data-rate reduction, or "source coding", is therefore a must for digital television.

            One of the fundamental decisions taken during the early days of DVB was the selection of MPEG-2 for the source coding of audio and video and for the creation of programme elementary streams, transport streams, etc. (the so-called systems level). Three international standards describe MPEG-2 systems, video and audio. Using MPEG-2, a video signal can be compressed to a data rate of, for example, 5 Mbit/s (a reduction of roughly 33:1) and still be decompressed in the receiver to deliver a picture quality close to what analogue television offers today.

            The term “channel coding” is used to describe the algorithms used for adaptation of the source signal to the transmission medium. In the world of DVB it includes the FEC (Forward Error Correction) and the modulation as well as all kinds of format conversion and filtering.



DVB-S2: THE SECOND GENERATION STANDARD FOR SATELLITE BROADBAND SERVICES



                        DVB-S2 is a digital satellite transmission system developed by the DVB Project. It makes use of the latest modulation and coding techniques to deliver performance that approaches the theoretical limit for such systems. Satellite transmission was the first area addressed by the DVB Project in 1993 and DVB standards form the basis of most satellite DTV services around the world today, and therefore of most digital TV in general. DVB-S2 will not replace DVB-S in the short or even the medium term, but makes possible the delivery of services that could never have been delivered using DVB-S. The original DVB-S system, on which DVB-S2 is based, specifies the use of QPSK modulation along with various tools for channel coding and error correction. Further additions were made with the emergence of DVB-DSNG (Digital Satellite News Gathering), for example allowing the use of 8PSK and 16QAM modulation. DVB-S2 benefits from more recent developments and has the following key technical characteristics:

DVB-S2 uses a very powerful Forward Error Correction (FEC) scheme, a key factor in achieving excellent performance in the presence of high levels of noise and interference. The FEC system is based on the concatenation of BCH (Bose-Chaudhuri-Hocquenghem) outer coding with LDPC (Low Density Parity Check) inner coding.

Video Encoder and Encoding




                   An encoder is a device used to convert analog video signals into digital video signals. Most encoders compress the information so it can be stored or transmitted in the minimum possible space. To achieve this, they take advantage of the spatial and temporal redundancy present in video sequences: by eliminating redundant information, a more compact encoding is obtained. Spatial redundancy is removed with DCT coefficient coding; temporal redundancy is removed with motion-compensated prediction, using motion estimation between successive blocks.

The method of operation is:

Ø  The signals are separated into luma (Y) and chroma (C).

Ø  The estimation error is found and the DCT is applied to it.

Ø  The coefficients are quantized and entropy coded (VLC).

Ø  The coefficients are multiplexed and passed to the buffer, which controls the quality of the signal.

Ø  The output bit stream of the buffer is checked to keep it from varying, because the signal is intended for transmission on a channel with a steady rate.

Ø  The quantized image is reconstructed for future use as a reference in prediction and motion estimation.

The DCT algorithm and block quantization can cause visible discontinuities at the edges of the blocks, leading to the well-known "blocking effect": because quantization zeroes out coefficients in the matrix, it may produce imperfections. As a result, newer video coding standards such as H.264/MPEG-4 AVC include deblocking filter algorithms able to reduce this effect.
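A minimal NumPy sketch of this intra-block pipeline (the block content and quantizer step are illustrative): an 8x8 block is transformed with an orthonormal 2-D DCT, coarsely quantized and reconstructed. The reconstruction error differs independently from block to block, which is what shows up as discontinuities at block edges.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)
rng = np.random.default_rng(1)

# A smooth 8x8 luma block: a gradient plus a little noise.
block = np.outer(np.linspace(16, 235, 8), np.ones(8)) + rng.normal(0, 2, (8, 8))

coeffs = C @ block @ C.T               # forward 2-D DCT
step = 32.0                            # coarse quantizer step (illustrative)
quantized = np.round(coeffs / step)    # quantization zeroes most coefficients
recon = C.T @ (quantized * step) @ C   # dequantize + inverse 2-D DCT

print("non-zero coefficients:", np.count_nonzero(quantized), "of 64")
print("max reconstruction error:", round(float(np.abs(recon - block).max()), 2))
```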

Digital Television Video Codecs


Motion Picture Standards and Compression Techniques


Here is a list of the different video coding standards:

Ø  MPEG-1: The first standard for audio and video compression. It provides video at a resolution of 352x240 at 30 frames per second, a quality slightly below that of conventional VCR video. Its Layer 3 audio compression format is MP3.

Ø  MPEG-2: The audio and video standard for broadcast-quality television. It offers resolutions of 720x480 and 1280x720 at 60 fps with CD-quality audio, and matches most TV standards, even HDTV. Its principal uses are DVDs, satellite TV services and digital cable TV signals. MPEG-2 compression can reduce a two-hour video to a few gigabytes (for example, at 5 Mbit/s, 7,200 s of video is 36 Gbit, i.e. about 4.5 GB). While decompressing an MPEG-2 data stream needs only modest computing resources, encoding to MPEG-2 requires considerably more processing power.

Ø  MPEG-3: Designed for HDTV, but abandoned when MPEG-2 proved sufficient for that purpose.

Ø  MPEG-4: A standard for graphics and video compression based on MPEG-1, MPEG-2 and Apple QuickTime technology. MPEG-4 files are smaller than JPEG or QuickTime files, so they are designed to transfer video and images over a narrow bandwidth and can sustain different mixtures of video with text, graphics and 2D or 3D animation layers.

Ø  MPEG-7: Formally called the Multimedia Content Description Interface, it supplies a set of tools for describing multimedia content. It is designed to be generic, not aimed at a specific application.

Ø  MPEG-21: Defines a Rights Expression Language (REL) and a Rights Data Dictionary. Unlike the other MPEG standards, which define compression coding methods, it describes a standard defining the description of content and the processes to access, search, store and protect the copyright of that content. The above are the main standards, but each has specific parts depending on the use.



Among these, the most important today are:

Ø  MPEG-2

Ø  MPEG-4, technically called MPEG-4 Part 10 / H.264 AVC.





MPEG-2 (H.262)




                  

MPEG-2 is a standard for “the generic coding of moving pictures and associated audio information”. It is an extension of the MPEG-1 international standard for digital compression of audio and video signals, created to broadcast at higher bit rates than MPEG-1. Initially developed for the transmission of compressed television programmes via broadcast, cable and satellite, and subsequently adopted for DVD production and some online delivery systems, it defines a combination of lossy video compression and lossy audio compression suited to current storage media, such as DVD or Blu-ray, without excessive bandwidth demands.

The main characteristics are:

Ø  New field and frame prediction modes for interlaced scanning.

Ø  Improved quantization.

Ø  The MPEG-2 transport stream permits the multiplexing of multiple programs.

Ø  New intra-frame variable-length coding (VLC): a code in which the number of bits used for a value depends on its probability, with more probable values receiving shorter codes. Stronger resilience to errors.

Ø  Uses the discrete cosine transform algorithm and motion-compensation techniques for compression.

Ø  Provides multichannel surround-sound coding. MPEG-2 contains different standard parts to suit different needs, and also annexes various levels and profiles.



MPEG-2 FUNDAMENTALS

         

                   Nowadays a TV camera can generate 25 pictures per second, i.e. a frame rate of 25 Hz. To turn this into digital television, the pictures must be digitized so they can be processed by a computer. An image is divided into two different signals: luminance (Y) and chrominance (UV); each sample has one luma value and two chrominance components. The television colour signal Red-Green-Blue (RGB) can be represented with luma and chrominance values. Chrominance bandwidth can be reduced relative to the luminance signal without a visible influence on picture quality.



An image can also be described with a special notation (4:2:2, 4:2:0). These are types of chroma subsampling relevant to the compression of an image, storing more luminance detail than colour detail. The first number refers to the luminance part of the signal, the following numbers to the chroma. In 4:2:2, luminance is sampled four times for every two chroma samples. Since the human eye is more sensitive to brightness than to colour, chroma can be sampled less than luminance without any perceptible difference. These signals are also partitioned into macroblocks, the basic unit within an image. A macroblock is formed from several blocks of pixels; depending on the codec the block will be bigger or smaller, normally a multiple of 4. MPEG-2 coding creates the data flow from three different frame types: intra-coded frames (I-frames), predictive-coded frames (P-frames) and bidirectionally-predictive-coded frames (B-frames), organized in a “GOP structure” (Group of Pictures).
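As a small illustration of 4:2:0 subsampling (the RGB-to-Y'CbCr coefficients below are the BT.601 full-range ones; the input frame is synthetic), the chroma planes are averaged over 2x2 pixel blocks, cutting the chroma sample count to a quarter:

```python
import numpy as np

rng = np.random.default_rng(2)
rgb = rng.integers(0, 256, (480, 640, 3)).astype(float)  # synthetic frame

r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# BT.601 RGB -> Y'CbCr conversion (full-range).
y  =  0.299    * r + 0.587    * g + 0.114    * b
cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128

def subsample_420(plane):
    # Average each 2x2 block: half the resolution in both directions.
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb420, cr420 = subsample_420(cb), subsample_420(cr)

full = y.size + cb.size + cr.size
sub  = y.size + cb420.size + cr420.size
print(f"samples: 4:4:4 = {full}, 4:2:0 = {sub} ({sub / full:.0%})")
```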

Ø  I-frame: A picture coded without reference to others, compressed directly from an original frame.

Ø  P-frame: Uses the previous I-frame or P-frame for motion compensation. Each block can be predicted or intra-coded.

Ø  B-frame: Uses the previous and the following I- or P-pictures and offers the highest compression. A block in a B-picture can be predicted in a forward, backward or bidirectional way, or intra-coded. A typical GOP structure could be: B1 B2 I3 B4 B5 P6 B7 B8 P9 B10 B11 P12 (see the sketch after the list below). I-frames code spatial redundancy, while B-frames and P-frames code temporal redundancy. MPEG-2 also supports interlaced scanning, a method of displaying an image whose aim is to make better use of the bandwidth and to remove flicker by showing twice the number of pictures per second at half the frame rate, for example producing 50 fields per second with a frame rate of 25 Hz. The scan divides a video frame into two fields, separating the horizontal lines into odd lines and even lines, which enhances motion perception for the viewer. Depending on the number of lines and the frame rate, systems are divided into:

Ø  PAL / SECAM: 25 frames per second, 625 lines per frame. Used in Europe.

Ø  NTSC: 30 frames per second, 525 lines per frame. Used in North America.
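A toy sketch of the GOP bookkeeping mentioned above (a simplification: real encoders also handle open/closed GOP boundaries, but the core rule, sending each reference frame before the B-frames that depend on it, is as shown):

```python
# Display order of a typical GOP (from the example above).
display = ["B", "B", "I", "B", "B", "P", "B", "B", "P", "B", "B", "P"]

# Coding/transmission order: each reference (I or P) is sent before
# the B-frames that sit between it and the previous reference.
coding, pending_b = [], []
for frame in display:
    if frame == "B":
        pending_b.append(frame)      # hold B-frames until the next reference
    else:
        coding.append(frame)         # send the reference first...
        coding.extend(pending_b)     # ...then the dependent B-frames
        pending_b = []
coding.extend(pending_b)

print("display order:", " ".join(display))
print("coding order: ", " ".join(coding))
```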

MPEG-2 encoding is organized into profiles. A profile is a "defined subset of the syntax of the specification"; each profile defines a range of settings for the different encoder options. As most settings are neither available nor useful in all profiles, the profiles are designed to match consumer requirements: a computer, a television or a mobile phone each need hardware specific to their use, yet each can be classified under a particular profile. An encoder is then needed to carry out the compression.





MPEG-2 COMPRESSION BASICS




Spatial Redundancy:

                   A compression technique which consists of grouping pixels with similar properties to minimize the duplication of data within each frame.

It involves analysing a picture to select and suppress redundant information, for instance removing frequencies that humans cannot perceive. To achieve this, a mathematical tool is employed: the Discrete Cosine Transform (DCT).





Intra Frame DCT Coding:

                   The Discrete Cosine Transform (DCT) is a transform related to the Discrete Fourier Transform, with many applications in science and engineering, but it is chiefly applied in image compression algorithms. The DCT is employed to reduce the spatial redundancy of the signals. The function has a good energy-compaction property and so accumulates most of the information in a few transform coefficients: the signal is converted to a new domain in which only a small number of coefficients contain most of the information while the rest have negligible values, giving the signal a much more compact representation. The transform is independent of the data; the algorithm is the same regardless of the data applied to it, and in itself it is an almost lossless transform (negligible loss). Because the DCT interprets the coefficients in frequency terms, it can extract the maximum compression. The result of applying the DCT is an 8x8 array of distinct values separated by frequency (a sketch follows the two points below):

Low frequencies carry the elements to which the human eye is most sensitive.

High frequencies carry the least perceptible components.
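To make the energy-compaction claim concrete, the sketch below (a synthetic smooth block; the same orthonormal DCT-II construction as in the earlier sketch) measures how much of the block's energy lands in the low-frequency corner of the 8x8 coefficient array:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)

# A smooth block, as typical image content is locally smooth.
x, yv = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 40 * np.sin(0.3 * x) + 30 * np.cos(0.2 * yv)

coeffs = C @ block @ C.T
energy = coeffs ** 2

# Energy captured by the 4x4 low-frequency (top-left) corner.
low = energy[:4, :4].sum() / energy.sum()
print(f"energy in low-frequency corner: {low:.1%}")
```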



Temporal Redundancy:

                   Temporal compression is achieved by looking at a succession of pictures.

Situation: an object moves across an otherwise static scene. The picture already contains all the required information until the movement occurs, so there is no need to encode the picture again until something changes. Even then, it is not necessary to re-encode the whole picture, only the part that contains the movement, since the rest of the scene is unaffected by the moving object and is the same as in the initial picture. The mechanism that determines how much movement there is between two successive pictures is motion-compensated prediction.

Consequently, a picture is rarely treated in isolation: an image will usually be constructed by prediction from a previous picture, and may itself be useful for creating the next picture.







Motion Compensated Prediction:

                   Identify the displacement of a given macroblock in the current frame with respect to the position it had in the reference frame.

The steps are as follows (a block-matching sketch follows the list):

Ø  Search the reference frame for the macroblock of the frame being encoded.

Ø  If the identical macroblock is found, only the corresponding motion vector is encoded.

Ø  Otherwise, the most similar macroblock (INTER) is chosen, and the motion vector must then be encoded along with the prediction error.

Ø  If there is no similar block (INTRA), the block is encoded using only spatial redundancy.
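A minimal exhaustive-search block-matching sketch (the 16x16 macroblock size and ±8-pixel search window are illustrative), returning the motion vector that minimizes the sum of absolute differences (SAD):

```python
import numpy as np

def best_motion_vector(ref, cur, top, left, size=16, search=8):
    """Exhaustive block matching: find the displacement (dy, dx) in `ref`
    that best matches the macroblock of `cur` at (top, left), by SAD."""
    block = cur[top:top + size, left:left + size]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            sad = np.abs(ref[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# Synthetic test: `cur` is `ref` rolled by (3, -2), so each block's best
# match in `ref` lies at displacement (-3, +2) with zero SAD.
rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(float)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))
mv, sad = best_motion_vector(ref, cur, top=16, left=16)
print("motion vector:", mv, "SAD:", sad)
```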



H.264 / MPEG-4 AVC


                  

H.264, or MPEG-4 Part 10, defines a high-quality video compression codec developed by the Video Coding Experts Group (VCEG) and the Moving Picture Experts Group (MPEG) to create a standard capable of providing good image quality at bit rates substantially lower than previous video standards such as MPEG-2, without increasing the complexity of the design so much that it would be impractical and expensive to implement. A further goal set by its creators was to widen its scope, i.e. to allow the standard to be used in a wide variety of networks and video applications, at both high and low resolutions, for DVD storage, etc.





In December 2001 the Joint Video Team (JVT), consisting of experts from VCEG and MPEG, was formed, and it developed this standard, finalized in 2003. ISO/IEC (International Organization for Standardization / International Electrotechnical Commission) and ITU-T (International Telecommunication Union, Telecommunication Standardization Sector) joined the project; the former is responsible for standardization rules with a focus on manufacturing, while the latter focuses mainly on tariff issues. ITU-T planned to adopt the standard under the name ITU-T H.264, and ISO/IEC wanted to name it MPEG-4 Part 10 Advanced Video Coding (AVC); hence the double name of the standard. To define the first codec, they started by examining the algorithms and techniques of the previous standards, modifying them or, where needed, creating new ones:

Ø  The DCT structure, in conjunction with the motion compensation of previous versions, was efficient enough that there was no need to make fundamental changes to its structure.

Ø  Scalable Video Coding: an important advance because it allows each user, regardless of the limitations of the device, to receive the best possible quality from a single transmitted signal. This is possible because a single compressed video stream is provided, and each user takes only what is needed to obtain the best video quality within the technical limitations of their receiver.



MPEG-4 has more complex algorithms and better performance, giving a marked quality improvement: it provides a higher compression rate than MPEG-2 for equivalent quality.



MAIN ADVANTAGES

                  

For MPEG-4 AVC the most important features are:

1. Provides almost DVD-quality video but uses a lower bit rate, so it is feasible to transmit digitized video streams over a LAN, and also over a WAN, where bandwidth is more critical and harder to guarantee.

2. Dramatically advances audio and video compression, enabling the distribution of content and services from low bandwidths to high-definition quality across broadcast, broadband, wireless and packaged media.

3. Provides a standardized framework for many other forms of media — including text, pictures, animation, 2D and 3D objects – which can be presented in interactive and personalized media experiences.

4. Supports the diversity of the future content market.                                   

5. Offers a variety of so-called “profiles”, tool sets from the toolbox useful for specific applications (for example the Simple Visual or Advanced Simple Visual profiles in video coding), so users need only implement the profiles that support the functionality they require.

6. Uses the DCT algorithm combined with motion compensation. This clearly shows that MPEG-4 aims to be a content-based representation standard, independent of any specific coding technology, bit rate or type of scene content, and it shows at the same time why and how MPEG-4 differs from previous moving-picture coding standards.

7. Low latency.



            The most important and relevant benefits are:

1. Reduces the amount of storage needed

2. Increases the amount of time video can be stored

3. Reduces the network bandwidth used by the surveillance system

Analog TV Transmitter





The basic block diagram of a television transmitter shows how the composite video signal from the camera is combined with the frequency-modulated sound signals and used to amplitude-modulate the RF carrier frequency. The vestigial sideband filter is used to conserve occupied channel space, satisfying the allocated channel-bandwidth requirements.

Composite video from camera



Composite video contains all the information that a TV or video monitor needs to present an image on the screen. The colour (chroma) burst signal is present on the back porch of the horizontal sync block; even for a black-and-white image the chroma burst will still be present.



Audio Signal
Sound processing




The stereo sound signals from the program are processed here to be applied to the sound sub-carriers for modulation.


The main sound signal is frequency-modulated onto an RF sub-carrier: a sine wave centred at 5.5 MHz, deviating by ±50 kHz at maximum sound levels. Within the full video signal, the main sound sub-carrier has a power level approximately 13 dB below the main vision carrier.
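A sketch of this FM sub-carrier (the sample rate and test tone are illustrative; the 5.5 MHz centre and ±50 kHz deviation are the figures above):

```python
import numpy as np

fs = 50e6                     # sample rate, Hz (high enough for a 5.5 MHz carrier)
t = np.arange(0, 2e-3, 1 / fs)

f_sub = 5.5e6                 # sound sub-carrier centre frequency, Hz
dev = 50e3                    # peak deviation at maximum sound level, Hz

audio = np.sin(2 * np.pi * 1e3 * t)   # 1 kHz test tone at full level

# FM: integrate the message to get the phase, then deviate around f_sub.
phase = 2 * np.pi * np.cumsum(dev * audio) / fs
subcarrier = np.cos(2 * np.pi * f_sub * t + phase)
```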

Full TV signal combiner

 The composite video and both sound sub-carriers form the full TV signal and are brought together in the combiner to be applied to the amplitude modulator as a complete TV signal. The figure shows the frequency spectrum of the full television signal being applied to the amplitude modulator of the RF carrier frequency. This television signal contains the composite video, the main sound sub-carrier and the second sound sub-carrier.


RF carrier frequency generator




Each TV channel in the system has its own carrier frequency to be modulated with its TV program. Refer to the list of RF carrier frequencies allocated to the TV channels.

Carrier wave
The TV viewers around the transmission area are able to select the desired programme by tuning their receivers to the carrier frequency of that TV station and rejecting all others. Once the channel frequency is selected, the full TV signal is extracted from it as composite video and sound. The carrier originates as a constant-level sine wave and is passed through an amplitude modulator to have the TV signal added to it.




Amplitude modulator




The RF carrier signal is made to vary in amplitude following the full TV signal which is amplitude modulating it.



The figure shows that the output of the amplitude modulator for the TV signal is a full double-sideband signal occupying some 11 to 12 MHz of spectrum. A large portion of the lower-sideband signal is removed by the vestigial sideband filter to limit the occupied bandwidth.



Vestigial sideband filter




The TV signal displaying the grey-scale step pattern has amplitude-modulated the main RF carrier frequency and is applied to the transmitting antenna, as seen in the figure below. With amplitude modulation, the instantaneous carrier level follows the modulating video signal: here the sync tip corresponds to the peak carrier amplitude and white level to the minimum, following the principle of negative modulation.

The vestigial sideband filter removes a large portion of the lower-sideband information, leaving only the lower frequencies up to 1.25 MHz (see TP5 F). This reduces the occupied bandwidth so the signal remains within the 7 MHz allocated to each channel. TV transmissions use a vestigial-sideband method, which lies somewhere between single sideband (SSB) and double sideband (DSB). The term vestigial comes from the word vestige, meaning 'something that remains of a previous existence'; in this case only part of the lower sideband remains.
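A frequency-domain sketch of this filtering (all values are illustrative except the 1.25 MHz vestige width from the text): an AM signal's spectrum is computed with an FFT, and lower-sideband components more than 1.25 MHz below the carrier are zeroed with an ideal brick-wall mask:

```python
import numpy as np

fs = 40e6
t = np.arange(0, 1e-3, 1 / fs)

fc = 10e6                            # illustrative vision carrier, Hz
video = np.cos(2 * np.pi * 3e6 * t)  # a 3 MHz video tone (illustrative)
am = (1 + 0.5 * video) * np.cos(2 * np.pi * fc * t)  # DSB AM: sidebands at fc±3 MHz

spec = np.fft.rfft(am)
freqs = np.fft.rfftfreq(len(am), 1 / fs)

# Ideal (brick-wall) vestigial filter: keep the carrier, the full upper
# sideband, and only 1.25 MHz of the lower sideband.
mask = freqs >= fc - 1.25e6
vsb = np.fft.irfft(spec * mask, n=len(am))
```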

Friday, 30 December 2016

ANALOG TELEVISION TRANSMISSION

In TV broadcasting, both the sound signal and the video signal are to be conveyed to the viewer using radio frequency. These two signals have very distinct features. The audio signal is a symmetrical signal with no DC component, and its frequency does not exceed 20 kHz. The video signal consists of a logical component (the line and field sync pulses) and an analogue part corresponding to the line-by-line picture scanning; this unsymmetrical signal therefore has a DC component, and its bandwidth extends from 0 to 5 MHz.

The two signals modulate carrier waves whose frequencies and types of modulation are set by the established standards. These modulated carriers are further amplified and then diplexed for transmission on the same line and antenna. This technique is used with high-power TV transmitters. However, for LPTs, i.e. transmitters operating at a sync peak power below 1 kW, the two signals (video and audio) are modulated separately (in most present-day TV transmitters the picture signal is amplitude-modulated while the audio signal is frequency-modulated) but amplified jointly, using common vision and aural amplifiers. Both systems have merits and demerits: with separate amplification a special group-delay equalisation circuit is needed because of errors caused by the diplexer, while with joint amplification intermodulation products are more prominent and special filters are required to suppress them. Hence the technique of joint amplification is suitable only for LPTs and not for HPTs.
Though frequency modulation has certain advantages over amplitude modulation, its use for picture transmission is not permitted because of the large bandwidth requirement, which cannot be met in the very limited channel space available in the VHF/UHF bands. Secondly, since in FM the power of the carrier and sideband components varies with modulation, a frequency-modulated signal reflected from structures near the receiver would cause variable multiple ghosts, which would be very disturbing. Hence the use of FM for terrestrial transmission of the picture signal is not permitted.
Thus amplitude modulation is invariably used for picture transmission, while frequency modulation is generally used for sound transmission because of its inherent advantages over amplitude modulation. As the picture signal is unsymmetrical, two types of modulation are possible.

i) Positive modulation
An increase in picture brightness causes an increase in the amplitude of the modulation envelope.
ii) Negative modulation
An increase in picture brightness causes a reduction in carrier amplitude, i.e. the carrier amplitude is maximum at the sync tip and minimum at peak white.
In television, though positive modulation was adopted in the initial stages, negative modulation is generally adopted nowadays (PAL-B uses negative modulation), as it has certain advantages over positive modulation.
Advantages of Negative Modulation
i) Impulse noise peaks appear only in the black region with negative modulation. This black noise is less objectionable than noise in the white picture region.
ii) The best linearity can be maintained for the picture region, and any non-linearity affects only the sync, which can be corrected easily.
iii) Transmitter efficiency is better, as peak power is radiated only during the sync duration (about 12% of the total line duration).
iv) The peak level, representing the blanking or sync level, may be maintained constant, thereby providing a reference for AGC in the receivers.
v) In negative modulation the peak power is radiated during the sync tip, so even in fringe-area reception picture locking is ensured, and derivation of the inter-carrier is also ensured.

VESTIGIAL SIDE BAND TRANSMISSION

Another feature of present-day TV transmitters is vestigial-sideband transmission. If the normal amplitude modulation technique were used for picture transmission, the minimum transmission channel bandwidth would be around 11 MHz, taking into account the space for the sound carrier and a small guard band of around 0.25 MHz. Using such a large transmission bandwidth would limit the number of channels in the spectrum allotted for TV transmission. To accommodate a large number of channels in the allotted spectrum, a reduction in transmission bandwidth was considered necessary. The transmission bandwidth could be reduced to around 5.75 MHz by using the single-sideband (SSB) AM technique, because in principle one sideband of the double-sideband (DSB) AM signal can be suppressed, since the two sidebands carry the same signal content.
It was not considered feasible to suppress one complete sideband, owing to the difficulty of designing an ideal filter for the TV signal: most of the energy is contained in the lower frequencies, and these frequencies carry the most important information of the picture. Removing them would cause objectionable phase distortion at these frequencies, affecting picture quality. Thus, as a compromise, only part of the lower sideband is suppressed, taking full advantage of the fact that:
i) visual disturbances due to phase errors are severe and unacceptable where large picture areas are concerned (i.e. at low frequencies), but
ii) phase errors become difficult to see on small details (i.e. in the HF region) of the picture. Thus phase distortion must be minimized at low modulating frequencies, whereas the high frequencies are tolerant of phase distortion, as it is very difficult to see.
The radiated signal thus contains the full upper sideband together with the carrier and the vestige (remaining part) of the partially suppressed LSB. The lower sideband contains frequencies up to 0.75 MHz, with a 0.5 MHz slope so that the final cut-off is at 1.25 MHz.

 RECEPTION OF VESTIGIAL SIDE BAND SIGNALS

Corresponding to the VSB characteristic used in transmission, a particular amplitude-versus-frequency response results. When the radiated signal is demodulated with an idealized detector, the response is not flat: the resulting signal amplitude over the double-sideband portion of the VSB is exactly twice the amplitude over the SSB portion.



This characteristic is shown in the figure (Response for VSB reception).
 
In order to equalize the amplitude, the receiver response is designed with an attenuation characteristic over the double-sideband region appropriate to compensate for this two-to-one relationship.
This attenuation characteristic, the so-called Nyquist slope, takes the form of a linear slope over the ±750 kHz DSB region, with the visual carrier located at the midpoint (the -6 dB point) relative to the SSB portion of the band. Such a characteristic exactly compensates the amplitude-response asymmetry due to VSB.
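A small numeric check of this compensation (the brick-wall transmitter model is a simplification; the ±750 kHz region and the -6 dB carrier point are the figures above): adding the upper- and lower-sideband contributions weighted by the Nyquist slope gives a flat demodulated response:

```python
import numpy as np

fv = np.linspace(0, 5e6, 1001)     # video (baseband) frequency, Hz

def rx(f):
    # Receiver Nyquist slope: 0 at fc-0.75 MHz, 0.5 (-6 dB) at the carrier,
    # 1 from fc+0.75 MHz upward; f is the offset from the vision carrier.
    return np.clip((f + 0.75e6) / 1.5e6, 0.0, 1.0)

# Demodulated amplitude: video frequencies below 0.75 MHz arrive in both
# sidebands (upper at +fv, lower vestige at -fv); higher ones only in the
# upper sideband.
amp = np.where(fv <= 0.75e6, rx(fv) + rx(-fv), rx(fv))

print("response flat?", np.allclose(amp, 1.0))
```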
Typical receiver characteristics
Modern practice, to simplify circuit and IF filter design, also attenuates the upper end of the channel so that the colour sub-carrier is likewise attenuated by 6 dB, as shown in the figure.
Typical receiver IF characteristics
TV receivers have a Nyquist characteristic for reception, which introduces group-delay errors in the low-frequency region. Notch filters are used in receivers as aural traps in the vision IF and video amplifier stages; these filters introduce group-delay (GD) errors in the high-frequency region of the video band. These GD errors are pre-corrected in the TV transmitter (using a receiver pre-corrector) so that an economical receiver filter design is possible. The group delays of the RX and TX with pre-correction are shown in the figure.
 
 
Group delay curves
Depth of Modulation
Care must be taken to avoid over-modulation at peak-luminance signal values, to avoid picture distortion and interruptions of the vision carrier. Over-modulated peak-white levels tend to reduce the vision carrier power or even cause momentary interruptions of the vision carrier. These periodic interruptions due to accidental over-modulation interrupt the sound carrier in inter-carrier receiver systems, producing an undesired buzz in the receiver output.
Therefore, to prevent this effect, the maximum depth of modulation of the visual carrier by peak-white signal values is specified as 87.5%. This 12.5% residual carrier (at white level) is required because of the inter-carrier sound method used in TV receivers.
Carrier signal and modulation envelope
The depth of modulation is set using a ramp or step signal as given in the manual; it should be 87.5% for 100% modulation (i.e. m = 1).
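A small sketch of this level-to-carrier mapping (the video level conventions of -0.3 V sync, 0 V blanking and 0.7 V peak white are nominal assumptions, not from the text): with negative modulation, the sync tip maps to 100% carrier amplitude and peak white to the 12.5% residual:

```python
import numpy as np

def carrier_amplitude(video_level):
    """Map a video level to relative carrier amplitude (negative modulation).
    Level convention (assumed): -0.3 = sync tip, 0.0 = blanking, 0.7 = peak white.
    Sync tip -> 1.000 (peak carrier); peak white -> 0.125 (12.5% residual)."""
    return np.interp(video_level, [-0.3, 0.7], [1.0, 0.125])

for name, level in [("sync tip", -0.3), ("blanking", 0.0), ("peak white", 0.7)]:
    print(f"{name:10s}: {carrier_amplitude(level):.3f} of peak carrier")
```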
Inter Carrier
 TV receivers incorporate the inter-carrier principle. In our system the inter-carrier frequency, i.e. the difference between the vision transmitter frequency and the sound transmitter frequency, is 5.5 MHz. Hence it must be ensured that even when the modulating video signal is at peak white, 12.5% residual carrier is left, so that the sound can be extracted even at peak white level, where the carrier power is minimum.

Power Output

 The peak power radiated during the sync tip, or sometimes the carrier power corresponding to black level, is designated the vision transmitter power. This power is measured with a through-line power meter after isolating the aural carrier; the power read on the meter is multiplied by a factor of 1.68 to obtain the peak (vision) power radiated. As the transmitter output is connected to an antenna having a finite gain, the effective radiated power (ERP) is obtained by multiplying the peak power by the antenna gain (w.r.t. a half-wave dipole). Hence a 100 W LPT using a transmitting antenna with a gain of 3 dB w.r.t. a half-wave dipole will have an ERP of 200 W, i.e. 53 dBm or 23 dBW.
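The dB bookkeeping of that example as a quick check (the 100 W and 3 dB figures are from the text above):

```python
import math

p_tx = 100.0                      # vision peak power, W
gain_db = 3.0                     # antenna gain w.r.t. a half-wave dipole, dB

erp_w = p_tx * 10 ** (gain_db / 10)          # ~200 W
erp_dbw = 10 * math.log10(erp_w)             # ~23 dBW
erp_dbm = erp_dbw + 30                       # ~53 dBm

print(f"ERP = {erp_w:.0f} W = {erp_dbw:.0f} dBW = {erp_dbm:.0f} dBm")
```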
 In TV broadcasting, the sound signal is transmitted by frequency-modulating the RF sound carrier in accordance with the standards. The sound carrier is 5.5 MHz above the associated vision carrier. The maximum frequency deviation is ±50 kHz, which is defined as 100 per cent modulation in the PAL-B system. In the case of NTSC, the maximum permissible deviation is ±25 kHz.

Standards

The characteristics of the TV signal in sections 1 and 2 refer to the CCIR B/G standards. Other standards are given in the tables below.
Table 1

Vision/sound carrier spacing      5.5 MHz
Channel width                     7 MHz (B, VHF); 8 MHz (G, UHF)
Sound modulation                  FM
FM deviation (maximum)            ±50 kHz

Table 2

Parameter                         American     European PAL
Lines per frame                   525          625
Frames per second                 30           25
Field frequency (Hz)              60           50
Line frequency (Hz)               15750        15625
Channel width (MHz)               6            7
Video bandwidth (MHz)             4.2          5
Colour subcarrier (MHz)           3.58         4.43
Sound system                      FM           FM
Maximum sound deviation (kHz)     25           50
Intercarrier frequency (MHz)      4.5          5.5