I would like to export simple graphics with shading in a vector graphics format, preferably PDF. This should certainly be possible with the PDF format, as I quote from Wikipedia (en.wikipedia.org/wiki/Portable_Document_Format#Vector_graphics):
I think I am correct in stating that Mathematica supports a more recent version of PDF than 1.3, so where is the problem? Is there a workaround? My graphics are still simple 2D lines, but such that vector formats are far preferable to raster ones.
Taking on board the comments of @Szabolcs and @Jens, I've tried a few more things out. Specifically, I have compared exporting the line shown above with a line created using the Polygon function instead of Line (similar, therefore, to @Szabolcs' triangle). I have exported both as PDF, EPS (then converted to PDF via the terminal) and SVG; in all instances the option "AllowRasterization" -> False is used. What I have found is:
So a very limited workaround for the most basic shapes is to recreate them as polygons, export to EPS, then convert to PDF. This, however, does not allow the use of any curved elements, which is still unsatisfactory for my purposes. I will consider reporting this as a bug (unless anyone advises me otherwise!).
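To check whether a given export actually stayed vector, one can scan the resulting PDF for embedded image XObjects. This is a rough helper of my own (not part of Mathematica, and a byte-level heuristic rather than a full PDF parse), based on the fact that rasterized content appears as `/Subtype /Image` objects in the file:

```python
def contains_raster(pdf_bytes: bytes) -> bool:
    """Return True if the PDF data declares any image XObjects (i.e. raster content)."""
    # Normalize whitespace so "/Subtype  /Image" and "/Subtype/Image" both match.
    data = b" ".join(pdf_bytes.split())
    return b"/Subtype /Image" in data or b"/Subtype/Image" in data

# Toy fragments standing in for real exported PDF content:
vector_frag = b"1 0 obj << /Type /Page >> stream 0 0 m 10 10 l S endstream"
raster_frag = b"2 0 obj << /Type /XObject /Subtype /Image /Width 64 >>"
print(contains_raster(vector_frag))  # False
print(contains_raster(raster_frag))  # True
```

Running this over the exported files makes it easy to see which export path (PDF direct, or EPS converted to PDF) silently rasterized the shading.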
I am doing artistic reconstructions of several animal species for a research project. I am making the drawings using vector illustration in Illustrator. I want to be able to reuse these illustrations in various contexts, and that means using vector illustrations in which I can resize line thickness, easily edit the image, etc.
However, I also want shading in these images to show the three-dimensionality of these animal reconstructions. What I have been doing for this is three-tone shading in Photoshop. However, unless the lines are very thin, the shading has a halo around it when exported back to Illustrator.
A related issue is that if I have to adjust the image at all, I have to go back and reshade the entire image. This happens a lot in my field, as images always have to be adjusted as new data becomes available, or old drawings depict the animal's anatomy wrong and have to be fixed. This isn't as dire as it sounds; in some cases I can get away with just altering parts of the image, but quite often I can't salvage large parts of the previous image.
For example, I had a picture of a marine animal where I made the body too short and had to move the fins back. This meant I had to reshade the fins from scratch: even though I had separate masking layers for the fins, they had been translated, and I couldn't translate the shading so that it looked natural. The shape of the fin and its shading were completely unchanged, but I couldn't use the old shading because I couldn't easily match it up to the new lines.
I'm wondering if I'm going about this the wrong way, and there is some more efficient way to shade vector art in Illustrator. I don't have a problem with shading in Photoshop; I am just trying to figure out if there is a way to avoid reinventing the wheel and reshading the same image over and over again every time the image changes, or when I want the same image at a different line thickness.
Except then it turns out that the dimensions of the fish have to be adjusted. This happens quite a bit: many of the major dimensions (length, etc.) are calculated using mathematical formulas, and other anatomical features, like the shape of the jaws and head, might have to be adjusted as more data comes in or an error gets discovered. (Because a lot of this is anatomical reconstruction, we don't have an actual animal we can just draw from; we have to reconstruct things.) For example, I drew the mouth wrong on one based on bad data and had to fix it, but had no way of knowing at the time.
The issue is that most of the fish has not changed. The shapes of the tail and dorsal fin are the same; it is just the body that has stretched out. But I cannot translate the shading around and reshade just the altered areas of the fish; I have to reshade the entire fish.
But then you need to start manipulating the nodes. In the second example, I just changed the position of one node, and in the third example I also modified the length of the handles, making "sharper edges".
I think much of this shading could easily be accomplished with Gradient Meshes. Note that Gradient Meshes are not the same as "gradients"; they are more complex and take more effort to understand and use well.
Vector + sun:
Note that the top face of the skewed cube is lighter than the same face in the other vector view. That means the vector render of a transformed object depends on the view, even when sunlight is enabled.
So after a skew transform is applied to a group, SketchUp rotates the group axes when they should remain in their original position; otherwise the face color is determined by the wrong combination of axis orientation and face orientation.
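A toy illustration of the mechanism I suspect (my own sketch, not SketchUp's actual code): with sun shading, a face's brightness is essentially a Lambert term on its world-space normal. A skew leaves the top face's world normal pointing up, but if the renderer re-derives the normal from the rotated group axes, it effectively tilts it and darkens the face:

```python
import math

def lambert(normal, sun_dir):
    """Diffuse intensity for a face: clamp(dot(N, L), 0, 1)."""
    d = sum(n * l for n, l in zip(normal, sun_dir))
    return max(0.0, min(1.0, d))

sun = (0.0, 0.0, 1.0)       # sun straight overhead
top_face = (0.0, 0.0, 1.0)  # correct world-space normal of the cube's top face

# Normal wrongly re-derived from group axes rotated by 30 degrees:
tilt = math.radians(30)
wrongly_tilted = (0.0, math.sin(tilt), math.cos(tilt))

print(lambert(top_face, sun))                  # 1.0   -> correct brightness
print(round(lambert(wrongly_tilted, sun), 3))  # 0.866 -> darker, axis-dependent
```

If the face color were computed from the true world-space normal, the top face would render identically in both vector views.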
I'm trying to figure out some simple concepts about image based lighting for PBR. In many documents and code, I've seen the light direction (FragToLightDir) being set to the reflection vector (reflect(EyeToFragDir,Normal)). Then they set the half vector to the mid-way point between the light and view direction: HalfVecDir = normalize(FragToLightDir+FragToEyeDir); But doesn't this just result in the half vector being identical to the surface normal? If so, this would mean that terms like NDotH are always 1.0. Is this correct?
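The suspicion above can be verified with a few lines of vector math (a standalone sketch; the names mirror the shader variables in the question). With L = reflect(EyeToFragDir, Normal) and V = -EyeToFragDir, we get L + V = -2*dot(I, N)*N, which normalizes to N itself, so H is exactly the surface normal:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(c / m for c in v)

def reflect(i, n):
    """GLSL-style reflect: i - 2*dot(i, n)*n, with i pointing toward the surface."""
    d = dot(i, n)
    return tuple(ic - 2.0 * d * nc for ic, nc in zip(i, n))

normal = (0.0, 0.0, 1.0)
eye_to_frag = normalize((0.3, -0.5, -0.8))    # arbitrary view ray hitting the surface

frag_to_light = reflect(eye_to_frag, normal)  # L = R, as in the IBL setup
frag_to_eye = tuple(-c for c in eye_to_frag)  # V

half = normalize(tuple(l + v for l, v in zip(frag_to_light, frag_to_eye)))

print(half)               # equals the normal: (0.0, 0.0, 1.0)
print(dot(normal, half))  # 1.0
```

So yes: under this construction NDotH is identically 1.0 for every view direction, which is why H plays no role in pure IBL.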
Here is another source of confusion for me. I'm trying to implement specular cube maps from the app Lys, using their algorithm for generating the correct roughness value for mip-level sampling (in the section "Pre-convolved Cube Maps vs Path Tracers" of their documentation). In this document, they ask us to use NDotR as a scalar. But what is this NDotR with respect to IBL? If it means dot(Normal, ReflectDir), then isn't that exactly equivalent to dot(Normal, FragToEyeDir)? If I use either of these dot products, the final result is too glossy at grazing angles (compared to their simpler BurleyToMipSimple() conversion), which makes me think I'm misunderstanding something about this process. I've tested the algorithm using NDotH, and it looks correct, but isn't that simply canceling out the rest of the math, since NDotH == 1.0? Here is my very simple function to extract the mip level using their suggested logic:
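On the NDotR question: the equivalence holds as an identity, not just by coincidence. Since R = reflect(-V, N) = -V + 2*dot(V, N)*N, we get dot(N, R) = -dot(N, V) + 2*dot(N, V) = dot(N, V). A quick numerical check (my own sketch, independent of Lys):

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(c / m for c in v)

def reflect(i, n):
    d = dot(i, n)
    return tuple(ic - 2.0 * d * nc for ic, nc in zip(i, n))

random.seed(1)
n = (0.0, 0.0, 1.0)
for _ in range(5):
    # Random view direction (frag -> eye) from the upper hemisphere.
    v = normalize((random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.1, 1)))
    r = reflect(tuple(-c for c in v), n)  # reflection of the eye ray
    assert abs(dot(n, r) - dot(n, v)) < 1e-12
print("dot(N, R) == dot(N, V) for every view direction")
```

So whichever of the two you feed into their formula, you get the same scalar; the grazing-angle glossiness difference must come from somewhere else in the mapping, not from the choice between NDotR and NDotV.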
Edit: Just to make sure I'm clear, I'm referring to pure image based lighting, with no directional lights, no spot lights, etc. Just a cube map that lights the whole scene, similar to the lighting in apps like Substance Painter and Blender's Viewport shading mode.
I'm not familiar with this particular app, but it looks like you're on the right track here. Part of the advantage of pre-convolving the cube maps is that each pixel is customized to be the light source for a particular reflection vector, so indeed NdotV is identical to NdotR, as you've noticed. R still needs to be calculated for the texture lookup, so it doesn't matter much which one you use for the dot product. There is no such thing as H or NdotH in IBL lookups; those are only for analytic lights such as point lights.
Hello. I've been "volunteered" to recreate this logo for a company that cannot find their vector original. I have the shapes, and have been playing around with gradients and masks and 3D, but can't figure out how to add the shading. It's not just one-sided shading, as there is a slight shade around additional edges. Since it's a logo, I have to be precise about it. Anyone? It will be used on very large displays, so I really don't want to go with raster.
In applications where raster-based effects like those inner shadows are desired, it is best practice to add them in a raster editing environment (Photoshop). You would rasterize the logo at a size and resolution appropriate for the application, then apply the effects.
Thank you. I get the "how-to", but am still wondering about "best practice". We have quite a bit of artwork that has vector-added drop shadows and the like, and it's all been printing fine (on packaging) for years.
Well, it is entirely possible to add such effects in Illustrator and produce a viable result, but many people do so half-blind to the moment when their work crosses into raster-based territory, resulting in loss of control over scalability and spot-color output. My old-school mind draws a very hard line between vector and raster editing (for print), to the extent that I never mix raster and vector in Illustrator, or anywhere but in InDesign for that matter. The shadows you want will be a raster effect even if you add them in Illustrator, essentially rasterizing the whole mark and subjecting it to resolution limits from that point on. Someone may argue, but IMO it's always better to retain up-front control of that process, as described in my previous post.
True, John, I do follow what you're saying that shadows are rasterized; e.g., we always have to ensure that the Document Raster Effects setting is properly set. We're glad we get good results with that; otherwise our raster library would have to grow exponentially. Displays are definitely a different critter. What the company had originally sent was a fuzzy, medium-resolution JPG, and that wasn't going to fly. We now have a flat AI file and are good to go. Thanks, everyone, for the interesting points of view.
I think if you play around with the Inner Glow effect, change the default blending mode from Screen to Multiply, and adjust the color, opacity and blur, you will get what you are looking for. This is most likely not a gradient. And I do not think it should be applied as a pixel-based effect, as then it would not be scalable.