
GPI trigger and robo peds

jumboricky
Joined: 28 Jul 2006

Is it possible to trigger an older Vinten robo ped via GPI?

Here's why:

I'm working on a show where the opening shot requires a camera to move and the "live" background to move (via Transform Engine) in sync with the camera. Simply put, the robo op zooms in as I move the background closer via the TE. I made a simple timeline that has the same duration as the robo op's move. It looks pretty good when the cam op and I move in unison.

That said...

Is there another "less expensive" approach to this? This is not a multi-camera show, so I don't have to worry about that aspect of the problem. What about the technology that's used during baseball games when you see the sponsorship behind the catcher, etc.?

Thanks for your input.

- Dave
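For what it's worth, the sync idea Dave describes (a TE timeline with the same duration as the robo ped's stored move) can be sketched in a few lines. Everything below is a hypothetical illustration, not a real Vinten or Transform Engine API; the GPI call is a placeholder for whatever contact closure or serial interface your controller actually exposes. Only the duration-matching idea comes from the post above.

```python
# Hypothetical sketch of keeping a TE background move in lockstep with
# a robo ped move of known duration. fire_gpi() is a stand-in for a
# real contact-closure trigger; MOVE_DURATION must equal the duration
# of the ped's stored move.

MOVE_DURATION = 4.0  # seconds; assumed, must match the robo move

def background_scale(elapsed, duration=MOVE_DURATION,
                     start_scale=1.0, end_scale=1.5):
    """Linearly interpolate the TE background zoom across the move,
    clamped so the background parks at its end position."""
    t = min(max(elapsed / duration, 0.0), 1.0)
    return start_scale + (end_scale - start_scale) * t

def fire_gpi_and_run(step=0.5):
    """Trigger the move, then sample the background scale every
    `step` seconds for the length of the move."""
    # fire_gpi()  # placeholder: real GPI trigger would go here
    return [background_scale(n * step)
            for n in range(int(MOVE_DURATION / step) + 1)]
```

Because both the ped and the background run off the same clock and duration, they start and land together without the cam op and TD having to eyeball it.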

brad fisher
Joined: 20 Aug 2005
I remember seeing a demo of some software from Namadgi (I think it's an Australian company) where you went to a wide shot, pressed a "register" button, and the inserted logo/advertisement would track your image through some clever edge-detect processing. It worked with pan, tilt and zoom, as long as you didn't pan away from every point of reference in your reference frame.

Since what you want is a "simple" camera zoom and background, I wonder if you've considered doing the camera zoom as a Transform Engine zoom instead? (I assume there's some "real" content to your camera, such as a desk, and not just a reader floating on a chromakey background.) The success will depend on how wide you need to start and how tight you need to end.

* Zoom your camera to its widest shot. Take a freeze on the Still Store. Send this through a Transform Engine (#1).
* Zoom your camera to its tightest shot, lock it off, and channel it through a Transform Engine (#2). Make the edges soft. The framing of this tight shot might be non-standard (i.e., not proper headroom) to give you more "body", which will be re-framed by the TE. (I wish I could draw diagrams in this forum!)
* Send the background through a Transform Engine (#3).
* Using Target Locate and Size, position TE#2 over TE#1 so that your live camera is "squeezed back" and the Still Store freeze is providing the extra picture around the edges.
* Do the chromakey so TE#3 is framed appropriately.
* Using Global, zoom all three TEs so that your end keyframe has TE#2 restored to a full-frame image (the soft edges will probably still be there, and the "zoomed-in" surround will be seen at the edges).
* Once you have cut away from the shot, change to a less-complex composite without all the TEs.

If your foreground is just a reader on chromakey, you may be able to dispense with TE#1. The goal of all this rigmarole is to avoid using the TE of your reader to expand beyond 100% size.

Imagine the reader has full chromakey behind them. If you normally had 10% headroom, you could frame the camera with no headroom and slide the TE down by 10%. Then when you zoom out in the TE, you have 10% extra "below" the bottom of frame, as well as "infinite" reframe capability above their head. In effect, you can zoom out by 20% and still have a properly framed shot. When the TE is full size (offset downwards by 10%), you still have full resolution of your reader. You just need to make sure they don't wave their arms to the sides (fingers chopped off) when the TE is zoomed back, and don't bounce upwards in their chair (chopping their head off) in either position.

If you get my drift, you'll see this is by far a "less expensive" method ... it's free!

brad fisher
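Brad's headroom arithmetic can be sanity-checked with a trivial sketch. The numbers are purely illustrative, and treating headroom as a simple fraction of frame height is my assumption, not anything from a real switcher.

```python
# Sanity check of the headroom trick: frame the talent with no
# headroom, offset the TE downward by your normal headroom amount,
# and the usable zoom-out range roughly doubles.

def zoom_out_range(normal_headroom=0.10):
    """Fraction of frame height you can zoom out while still
    delivering a properly framed shot (illustrative arithmetic)."""
    spare_below = normal_headroom  # gained from the downward TE offset
    spare_above = normal_headroom  # headroom you no longer carry on camera
    return spare_below + spare_above
```

With the usual 10% headroom this comes out to 0.2, i.e. the 20% zoom-out brad describes, while the full-size end position keeps the reader at full resolution.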
Bill D
Joined: 18 Aug 2005
So it sounds like the green or blue wall just helps things and maybe doesn't require all the calibration that would otherwise be necessary. Either way, one of the coolest innovations in TV in a while, I think. Are they trying to improve the processing on the HD stuff so there isn't such a delay, or will that always be the case? Thanks, Bill
jonas
Joined: 21 Aug 2005
http://www.digitalbroadcasting.com/Content/ProductShowcase/product.asp?DocID={D729D7D4-BF95-11D4-8C7F-009027DE0829}&VNETCOOKIE=NO

Just like Mike said: special pan heads and a lot of mapping...
Mike Cumbo
Joined: 18 Aug 2005
Bill, we used a PVI system that did not require a green screen for an early-season college hoops tourney I did. Calibration of the pan head and lens was necessary, of course. The logos just floated on the court.
Bill D
Joined: 18 Aug 2005
[quote="Rick Tugman"][quote="jumboricky"]What about the technology that's used during baseball games when you see the sponsership behind the catcher, etc?[/quote] Dave: What you see behind the catcher on a FOX Major League Baseball game is basically a green screen. It is processed through a computer program which is calibrated through the cameras lense that creates the effect you see. This is all set up ahead of time and the calibration process takes some time to get right. Rick.[/quote]

When this technology first came out, I thought a green or blue wall wasn't necessary. I remember the CBS Early Show (network) using it to put their logos on buildings and such while the camera pans. I never liked seeing the green or blue wall during the center-field tight replay; sure, no other viewer cares. It works at Fenway or Yankee Stadium because their walls are already those colors. Is the green of a football field used as well, or is that set up differently?

Bill
jonas
Joined: 21 Aug 2005
Not to nitpick, but on the CBS 'A' games PVI is on site and gives us the ability to preview the effects. There is a delay, as Rick says...

jonas
jumboricky
Joined: 28 Jul 2006
I will look into the PVI suggestion. I have a feeling it's too pricey for what our production would be willing to spend. Big surprise, huh? Thanks for the info.

- Dave
JohnHowardSC
Joined: 21 Aug 2005
"and I'm not even sure what system it is because it is not on site" Not sure what CBS uses, but ABC is (or was) using PVI. Our college football virtuals were all added from New York. During transmission we just have the 3 up cameras frame the field a certain way and the PVI peeps make their marks.
John Howard Independent Technical Director Columbia, SC
Rick Tugman
Joined: 4 Sep 2005
[quote="jumboricky"]What about the technology that's used during baseball games when you see the sponsership behind the catcher, etc?[/quote]

Dave: What you see behind the catcher on a FOX Major League Baseball game is basically a green screen. It is processed through a computer program that is calibrated through the camera's lens to create the effect you see. This is all set up ahead of time, and the calibration process takes some time to get right.

There is also an audio delay associated with this process, which is corrected with a tally gate handled by the engineers and the audio technician. It's more complicated than it looks, and it takes several people and their respective departments to make sure it's right. FOX uses the SportsVision & PVI system. If you need more information on this system, I can put you in touch with them, as I see them every week for FOX baseball.

CBS uses a system that is downstream of the remote truck. The actual process is done in New York, and I'm not even sure what system it is because it is not on site. The one advantage to the CBS way of doing things is that, when working their events, the remote location doesn't have to worry about losing the virtual signal or audio delays. I'm pretty sure the audio is all downstream of the remote.

Good luck. Rick.