Poserverse News Page
anomalaus last edited by
@trekkiegrrrl I'm surprised they haven't monetized them already! Though having all your blueprints available would just tempt foreign manufacturers with cheap labour and robotic machine shops to undercut you globally.
PoserDirect announces Site closing
After 10 years of running PoserDirect, and Sparkyworld since 2001, I'm sorry to say that the hosting for PoserDirect won’t be renewed this year. So around the 19th of October 2018, PoserDirect will be closing down.
More Info here: http://poserdirect.com/
Apple runs into another embarrassing issue with the new MacBook Pro
by Bryan Clark
A series of embarrassing gaffes in Cupertino are leading many to question whether Apple has lost its eye for detail.
Just weeks after Apple acknowledged, then patched, a throttling issue in its updated line of “Pro-level” laptops, users are now reporting another problem that casts Apple‘s quality control in a less-than-favorable light: the speakers. Numerous users have taken to Reddit and YouTube to report or demonstrate the problem.
It appears that both the 13- and 15-inch MacBook Pros have an issue that causes a crackling sound in their speakers, a distortion of sorts that’s present even at relatively low volume. The speakers, it’s worth noting, were advertised as taking “listening to new levels with wide dynamic range and more bass for maximum boom.” Apple also noted that these speakers were its best yet, with a direct connection to the power supply that enabled “peak amplification.”
It’s unclear at this time what’s causing the issue, although a source familiar with the matter tells TNW the company is aware of the issue, and looking into it. Finding the cause, though, could be troublesome. User reports, so far, are all over the map. Some report the issue as one that primarily happens in specific applications, like Apple Music, iTunes, or Garage Band. Others say YouTube videos have the same effect. Still others report that they’re also noticing it when using Windows via BootCamp — a problem that may point to a hardware issue, rather than a software “bug.”
eclark1849 last edited by eclark1849
8 Tips to Increase the Photo-Realism in Your Renders
by Justin Slick
Photo-realism is one of the ultimate goals for many CG artists, and it's also one of the most difficult to achieve. Even if you're relatively new to 3D computer graphics, however, today's tools and workflow techniques make photo-realism very attainable. Here are eight techniques to help you get there:
Bevel, Bevel, Bevel
Forgetting to bevel or chamfer edges is one of the most common errors committed by beginning 3D artists. There are almost no razor-sharp edges in nature, and even most man-made objects have a slight roundness where two opposing surfaces meet. Beveling helps bring out detail, and really sells the realism of your model by allowing edges to properly catch highlights from your lighting solution.
Using the bevel (or chamfer tool in 3ds Max) is one of the first things you should learn as a modeler. If you're new enough to 3D that you're unsure how to create a beveled edge, chances are you could truly benefit from a good introductory tutorial or even a training subscription.
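The geometric idea behind a chamfer is simple enough to sketch in a few lines of plain Python (this is an illustrative sketch, not tied to any particular package's bevel tool): each sharp corner of a polygon is replaced by two points offset a small distance along its adjacent edges, so the corner becomes a small flat face that can catch a highlight.

```python
import math

def chamfer(polygon, d):
    """Return a new vertex list with every corner of `polygon` cut back by `d`.

    `polygon` is a list of (x, y) vertices in order; each corner is replaced
    by two points, one offset toward each neighbouring vertex.
    """
    out = []
    n = len(polygon)
    for i, (x, y) in enumerate(polygon):
        for nx, ny in (polygon[i - 1], polygon[(i + 1) % n]):
            # Unit vector from this corner toward the neighbouring vertex
            dx, dy = nx - x, ny - y
            length = math.hypot(dx, dy)
            out.append((x + d * dx / length, y + d * dy / length))
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chamfer(square, 0.1))  # 8 vertices: each sharp corner becomes two
```

A mesh bevel tool does the same thing in 3D, inserting narrow faces along edges rather than points along a 2D outline, but the effect on highlights is the same.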
Learn to Use Linear Workflow
Even though linear workflow has been around for years, it's still a confusing and complicated idea for beginners. I won't try to completely explain the theory here (there's just too much to say), but I do want to make sure you're at least aware that these techniques exist.
The need for linear workflow essentially comes down to the fact that your monitor displays images in a different color space (sRGB) than what is output by your render engine (linear). In order to combat this, artists must take the necessary steps to apply gamma correction to a render.
But linear workflow actually goes pretty far beyond simple gamma corrections—it's all about eschewing old techniques and workarounds (most of which are based on outdated math), and moving toward true physically based lighting solutions.
There's a lot more to say about linear workflow, and thankfully it's been discussed exhaustively over the past few years. Here's a useful link for learning the theory behind the process—he links out to quite a few sources, so there's plenty of reading to be done. The second link is a Digital Tutors course that deals specifically with linear workflow in Maya 2012.
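The gamma correction at the heart of linear workflow can be shown in a few lines. This sketch uses the standard sRGB transfer functions (a piecewise curve, not a plain 2.2 gamma) to illustrate why render math must happen in linear space:

```python
# Convert sRGB-encoded values to linear light before doing render math,
# then re-encode for display. These are the standard sRGB transfer
# functions for a single channel in the 0..1 range.

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Averaging two texel values in sRGB space vs. linear space gives
# noticeably different results -- one reason unmanaged renders look wrong.
a, b = 0.2, 0.8
naive = (a + b) / 2                                    # 0.5, done in sRGB space
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
print(naive, correct)  # the linear-space average re-encodes brighter than 0.5
```

In practice your render engine and compositing tools handle these conversions for you once color management is configured; the point of the sketch is only to show that blending, lighting, and filtering give different answers depending on which space they happen in.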
Next-gen Nvidia GPUs could use AI for rendering 3D hair models in future games
by Nathaniel Artosilla
Future Nvidia GPUs could soon use AI to improve rendering of finer 3D details such as hair. This new process of using deep learning can allow developers to render 3D hair models from a 2D reference image, the first of its kind to work in real-time.
Researchers at the University of Southern California and Pinscreen have developed the new process alongside Microsoft. It is expected to improve on hair rendering techniques currently used by software such as Nvidia’s HairWorks, PCGamesN reported.
According to the researchers, realistic hair modeling is one of the most difficult tasks when creating digitized virtual humans. In contrast to, say, the human face, hair shows a wide range of shape variations and can be highly complex thanks to its volumetric structure and the individual flaws of each strand.
The fact that researchers believe a neural network will be able to handle this task speaks volumes regarding its intricacy. Neural networks work pretty much like a brain connecting various nodes in different ways and in many layers, classifying and organizing information into different lobes or zones.
To teach this “brain,” researchers initially gave it a dataset of 40,000 different hairstyles and 160,000 2D orientation images taken from random viewpoints to analyze. It then learned to reproduce 3D rendered hair from those images in varying lengths, styles, and colors.
Read More Here:
Next-gen Nvidia GPUs could use AI for rendering 3D hair models in future games
These cards are already available.... And were announced over two years ago.
Now they are known as Volta cards.
This explains why they are so fast.
The consumer version isn't cheap either...
@shvrdavid This is a beast. But here in France, at €3,200, it was really out of my CG budget when I was looking for graphics cards.
But still drooling after it.
@shvrdavid Apparently, I didn't post the link to the article as I had intended. I don't exactly keep up with the gaming industry, so I wouldn't actually know how old that news is, BUT, the article I was linking to was dated only about a month ago.
@eclark1849 Oddly, the Volta GPU isn't aimed at gaming at all. Yes you can use it for that and it is a very fast card, but that wasn't what the GPU was primarily designed for at all.
SIGGRAPH 2018 To Spotlight Women In Computer Graphics
VANCOUVER — At SIGGRAPH 2018, numerous women from across all computer graphics and interactive techniques specialties will be spotlighted.
“In this day and age, as the voices of women continue to be elevated to new heights and their contributions to all forms of business, art, technology, and beyond are more important than ever, we are beyond thrilled to be able to add to that conversation and shine a light on the amazing work that women do for our industry,” says SIGGRAPH 2018 conference chair, Roy C. Anthony. “Among our presenter ranks are women who play a huge role in their respective fields and deserve every ounce of praise they receive. Furthermore, seven of our 2018 committee chairs, as well as our 2019 and 2020 conference chairs — Mikki Rose and Kristy Pron — are also women.”
Some of the many women who will showcase work at SIGGRAPH 2018 include:
Jen Underdahl, VFX Producer, Marvel Studios (panelist, “THE MAKING OF MARVEL STUDIOS’ ‘AVENGERS: INFINITY WAR’” Production Session)
Isabelle Langlois, Vice President of Production, Rodeo FX (panelist, “’GAME OF THRONES’ SEASON 7: ORCHESTRATING SEA BATTLES AND BLOWING UP A BIG WALL” Production Session)
Sarah Eagle Heart, CEO, Native Americans in Philanthropy (voice of "Luna" in the Immersive Pavilion’s “Crow: The Legend” and panelist, “’CROW: THE LEGEND’ – BRINGING A NATIVE AMERICAN LEGEND INTO VR” Production Session)
Fran Kalal, Character Tailoring Lead, Pixar Animation Studios (panelist, “’INCREDIBLES 2’: SUIT UP, IT MIGHT GET WEIRD!” Production Session)
Monika Fleischmann, pioneering German research artist, digital media scientist, and curator of new media art (recipient, 2018 ACM SIGGRAPH DISTINGUISHED ARTIST AWARD FOR LIFETIME ACHIEVEMENT IN DIGITAL ART)
During recent years, the CG industry has overcome many daunting hurdles, but one that still remains is the Uncanny Valley. Indeed, we are inching closer and closer to achieving a computer-generated human that is indistinguishable from a real person. And MPC’s work for Blade Runner 2049 gained us even more ground.
How ironic it is that Blade Runner 2049 focuses on bioengineered humans made to pass as the real thing, while one of the supposed engineered characters in the movie was not an actress, but indeed a photorealistic digital model.
In the 1982 Blade Runner, Rick Deckard (Harrison Ford) is a so-called blade runner, an agent who hunts down and terminates replicants, which are androids that look like real human beings. In the course of his mission, he meets Rachael (Sean Young), a replicant who evokes human emotion, blurring the line between what’s human and what’s not.
Blade Runner 2049 picks up the story 30 years in the future, where bioengineered replicants have been integrated into society as servants and slaves.
A replicant named K is hired as a blade runner, an agent who now hunts down and terminates rogue replicants. During his pursuit, he finds the remains of Rachael, who appears to have died during an emergency C-section, indicating that replicants are capable of giving birth. K must now hunt down the replicant child, fathered by Deckard.
In this sequel, Ford reprises his role, but a CG human takes Young’s place. In the film, archival footage and stills of Young from the original Blade Runner are used to represent Rachael. Additionally, Young’s likeness was digitally superimposed onto a stand-in, to briefly portray Rachael in Deckard’s hallucination. It is also used to portray a replicant that is physically identical to the original version of Rachael with the exception of eye color.
Read More Here: http://www.cgw.com/Publications/CGW/2017/Volume-40-Issue-6-Nov-Dec-2017-/Digital-DNA.aspx
Top Magazines for 3D artists
If you're into 3D modeling, you surely face difficult problems from time to time, when you need advice and tips from experienced graphics professionals. Yes, you can easily find a lot of information on the Internet, but websites have certain limitations and don't always provide the full details. We believe that every 3D artist should also gain profound knowledge of the world of 3D through well-edited, respectable magazines.
The information and resources professional magazines provide help both young and experienced 3D artists supplement their online learning with offline resources. Printed material is edited and researched by highly skilled editors and leading experts in the industry. That’s why today we have found and put together for you some of the magazines for digital artists that certainly contain plenty of interesting and necessary information.
Read More here: https://hum3d.com/blog/top-magazines-for-3d-artists/
Chaos Group debuts new real-time ray tracing technology at SIGGRAPH
by Bradley Thorne
Chaos Group has given the world its first look at Project Lavina, a groundbreaking new technology for real-time ray tracing. Using the dedicated RT Core within NVIDIA’s Turing-based Quadro RTX GPUs, the project promises to fundamentally change the direction of computer graphics by introducing a new level of visual quality for real-time games, VR, and 3D visualization.
Named after the Bulgarian word for “avalanche,” Project Lavina debuted as a SIGGRAPH tech demo, depicting a massive 3D forest and several architectural visualizations running at 24-30 frames per second in standard HD resolution. Rather than using game engine shortcuts like rasterized graphics or a reduced level of detail, each scene features live ray tracing, creating truly interactive photorealism.
“We’ve been developing ray tracing technology for 20 years, and this is one of the biggest breakthroughs we’ve ever made,” says Vlado Koylazov, co-founder and CTO of Chaos Group. “Real-time and ray tracing coming together is the beginning of something big.”
Bob Pette, vice president of professional graphics for NVIDIA says: “We are thrilled to see how well Project Lavina takes advantage of the RTX stack debuting on our new Quadro RTX line. True real-time ray tracing is our goal, and it’s great to see a market leader like Chaos Group achieve it with us.”
Read More here: https://www.3dartistonline.com/news/2018/08/chaos-group-debuts-new-real-time-ray-tracing-technology-at-siggraph/
Reallusion introduces iClone Motion LIVE
Reallusion has announced a new multi-device motion capture system – iClone Motion LIVE – which allows users to capture across a range of motion capture gear for face, body, and hands. The technology-bending platform is fully compatible with hardware such as Faceware, Xsens, Perception Neuron, OptiTrack, Leap Motion, and more.
Motion LIVE enables actors and directors to view the capture on any 3D character in real time, enhancing their ability to see and respond to the performance. Full body or facial animation can also be controlled for multiple characters simultaneously.
Both custom imported characters and fully-rigged 3D characters can be animated in the new system, including characters from tools like Daz Studio, Character Creator, and iClone. The iClone character pipeline also has export presets that allow characters and animations to be sent from iClone to Unity, Unreal, Maya, 3DS Max, C4D, CryEngine, and Blender.
To drive the mocap devices within Motion LIVE you will need both iClone, the Motion LIVE plugin and at least one of the following: Leap Motion, Faceware, Perception Neuron, OptiTrack or Xsens.
The Motion LIVE hand motion capture solution is priced from $249 on Reallusion’s store, with the all-in-one full-body system starting from $2,000 during the launch special offer period. After this period they are listed from $398 and $3,486 respectively. Everyone who buys the Motion LIVE plugin from Reallusion’s store will be offered a free Leap Motion Hand Capture Profile worth $99.
Not really news per se.
eclark1849 last edited by eclark1849
A Kraken makeover! TurboSquid‘s new site revealed
TurboSquid is aiming to make your life a lot easier with their elegant site redesign and amazing new customer support...
Founded in 2000 and now boasting a library of over 650,000 models, TurboSquid runs the largest marketplace for stock 3D assets around, and has been persistently pushing to make buying stock digital assets easier ever since.
As leaders in the 3D asset marketplace, TurboSquid know the value of steering a good brand (let's be honest, TurboSquid is a name not easily forgotten!). Naturally, the company is constantly looking for ways to advance, accelerate, and generally improve its service. One of the best strategies to achieve that is to make the service as impressive and easy to use as possible; so, it's doing just that.
"The new logo and other design choices position TurboSquid to stand out as the premiere destination for 3D models in 2018 and beyond."
Alongside the company's sharp new website, the company has vastly improved the usefulness of the service with neat design changes and promising world-class support.
Nail the colors to the mast!
From top to tail (or maybe tentacle to mantle?) the site is getting a graphical overhaul, including new iconography and typography and a more cohesive color palette; all of which contributes to a vastly improved user experience and generally making it a nicer place to be.
Reallusion unveils Character Creator 3, Motion LIVE and iPhone X facial mocap at SIGGRAPH
by Bradley Thorne
Reallusion has revealed several innovations in motion capture and character creation at SIGGRAPH 2018. A new, standalone, character creation tool, Character Creator 3 debuted alongside real time motion capture solution – Motion LIVE, and LIVE Face for iPhone X.
Character Creator 3, the new generation of iClone Character Creator, will separate from iClone to become a standalone tool. It will present users with a full solution for generating optimised 3D characters, each of which will be ready for intensive artistic design with a new quad base, round-trip editing in ZBrush and photorealistic rendering using Iray.
The new tool will also provide a new game character base with topology optimized for mobile, game, and AR/VR developers. By far the biggest breakthrough comes in the form of integration with InstaLOD’s model and material optimisation technology. This will generate game-ready characters that can be animated on-the-fly.
iClone Motion LIVE will connect industry-standard gear into one solution, including Rokoko, Leap Motion, Xsens, Faceware, OptiTrack, Noitom, and iPhone X. The intuitive plug-and-play design will make connecting complicated mocap devices a simpler process, animating custom imported characters or fully-rigged 3D characters generated by Character Creator, Daz Studio, and other sources.
Also announced were 3D Face Motion Capture and LIVE Face for iPhone X, granting users the ability to record instant facial motion capture on any character with their iPhone X. Expanding on the technology behind Animoji and Memoji, Reallusion is looking to lift iPhone X animation and motion capture to the next level, with a tool for both studios and independent creators. Users can also blend LIVE Face with iClone Motion LIVE, utilising the power of Xsens, Perception Neuron, Rokoko, OptiTrack, and Leap Motion, for full-body motion capture.
How to Prepare Your Model for 3D Printing
by Justin Slick
3D printing is an incredibly exciting technology and getting to hold one of your digital creations in the palm of your hand is a fantastic feeling.
If you want to print one of your 3D models so it is transformed into a real-world object you can hold in your hands, there are a few things you should do to prepare your model for 3D printing.
To ensure that the printing process goes as smoothly as possible and to save you time and money, follow this series of steps before you send your file off to the printer:
Make Sure the Model is Seamless
Hollow the Model for Lower Cost
Eliminate Non-Manifold Geometry
Check Surface Normals
Convert Your Model
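The "seamless" and "non-manifold" items in the checklist above both come down to the same topological rule: in a watertight triangle mesh, every edge must be shared by exactly two faces. That check is easy to sketch in plain Python (an illustrative sketch; real slicers and repair tools do this, and much more, for you):

```python
from collections import Counter

def edge_use_counts(faces):
    """Count how many faces use each undirected edge of a triangle mesh.

    `faces` is a list of (a, b, c) vertex-index triples.
    """
    edges = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return edges

def is_watertight(faces):
    """A mesh is printable as a solid only if every edge borders exactly two faces."""
    return all(count == 2 for count in edge_use_counts(faces).values())

# A closed tetrahedron passes; remove one face and boundary edges appear.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))        # True
print(is_watertight(tetra[:-1]))   # False: three edges now border one face
```

Edges counted once are holes in the surface, and edges counted three or more times are non-manifold geometry; either will confuse a printer's slicing step, which is why repairing them comes before everything else on the list.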
anomalaus last edited by
@eclark1849 that's interesting. Since I've recently leapt into the depths of 3D printing, it's come to my attention that most of the freely available slicing software out there is already quite capable of automatically and configurably coping with non-manifold, unwelded figures, with variable amounts of infill. I did absolutely nothing except remove extraneous props from the figure I exported as obj from Poser and 3D printed at home. The slicing software dealt with everything else.